Dec  2 19:19:24 np0005543037 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  2 19:19:24 np0005543037 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  2 19:19:24 np0005543037 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 19:19:24 np0005543037 kernel: BIOS-provided physical RAM map:
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  2 19:19:24 np0005543037 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  2 19:19:24 np0005543037 kernel: NX (Execute Disable) protection: active
Dec  2 19:19:24 np0005543037 kernel: APIC: Static calls initialized
Dec  2 19:19:24 np0005543037 kernel: SMBIOS 2.8 present.
Dec  2 19:19:24 np0005543037 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  2 19:19:24 np0005543037 kernel: Hypervisor detected: KVM
Dec  2 19:19:24 np0005543037 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  2 19:19:24 np0005543037 kernel: kvm-clock: using sched offset of 4773481280 cycles
Dec  2 19:19:24 np0005543037 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  2 19:19:24 np0005543037 kernel: tsc: Detected 2800.000 MHz processor
Dec  2 19:19:24 np0005543037 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  2 19:19:24 np0005543037 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  2 19:19:24 np0005543037 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  2 19:19:24 np0005543037 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  2 19:19:24 np0005543037 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  2 19:19:24 np0005543037 kernel: Using GB pages for direct mapping
Dec  2 19:19:24 np0005543037 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  2 19:19:24 np0005543037 kernel: ACPI: Early table checksum verification disabled
Dec  2 19:19:24 np0005543037 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  2 19:19:24 np0005543037 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 19:19:24 np0005543037 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 19:19:24 np0005543037 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 19:19:24 np0005543037 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  2 19:19:24 np0005543037 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 19:19:24 np0005543037 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  2 19:19:24 np0005543037 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  2 19:19:24 np0005543037 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  2 19:19:24 np0005543037 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  2 19:19:24 np0005543037 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  2 19:19:24 np0005543037 kernel: No NUMA configuration found
Dec  2 19:19:24 np0005543037 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  2 19:19:24 np0005543037 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec  2 19:19:24 np0005543037 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  2 19:19:24 np0005543037 kernel: Zone ranges:
Dec  2 19:19:24 np0005543037 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  2 19:19:24 np0005543037 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  2 19:19:24 np0005543037 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 19:19:24 np0005543037 kernel:  Device   empty
Dec  2 19:19:24 np0005543037 kernel: Movable zone start for each node
Dec  2 19:19:24 np0005543037 kernel: Early memory node ranges
Dec  2 19:19:24 np0005543037 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  2 19:19:24 np0005543037 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  2 19:19:24 np0005543037 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  2 19:19:24 np0005543037 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  2 19:19:24 np0005543037 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  2 19:19:24 np0005543037 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  2 19:19:24 np0005543037 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  2 19:19:24 np0005543037 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  2 19:19:24 np0005543037 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  2 19:19:24 np0005543037 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  2 19:19:24 np0005543037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  2 19:19:24 np0005543037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  2 19:19:24 np0005543037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  2 19:19:24 np0005543037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  2 19:19:24 np0005543037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  2 19:19:24 np0005543037 kernel: TSC deadline timer available
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Max. logical packages:   8
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Max. logical dies:       8
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Max. dies per package:   1
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Max. threads per core:   1
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Num. cores per package:     1
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Num. threads per package:   1
Dec  2 19:19:24 np0005543037 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  2 19:19:24 np0005543037 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  2 19:19:24 np0005543037 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  2 19:19:24 np0005543037 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  2 19:19:24 np0005543037 kernel: Booting paravirtualized kernel on KVM
Dec  2 19:19:24 np0005543037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  2 19:19:24 np0005543037 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  2 19:19:24 np0005543037 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  2 19:19:24 np0005543037 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  2 19:19:24 np0005543037 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 19:19:24 np0005543037 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  2 19:19:24 np0005543037 kernel: random: crng init done
Dec  2 19:19:24 np0005543037 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: Fallback order for Node 0: 0 
Dec  2 19:19:24 np0005543037 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  2 19:19:24 np0005543037 kernel: Policy zone: Normal
Dec  2 19:19:24 np0005543037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  2 19:19:24 np0005543037 kernel: software IO TLB: area num 8.
Dec  2 19:19:24 np0005543037 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  2 19:19:24 np0005543037 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  2 19:19:24 np0005543037 kernel: ftrace: allocated 193 pages with 3 groups
Dec  2 19:19:24 np0005543037 kernel: Dynamic Preempt: voluntary
Dec  2 19:19:24 np0005543037 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  2 19:19:24 np0005543037 kernel: rcu: 	RCU event tracing is enabled.
Dec  2 19:19:24 np0005543037 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  2 19:19:24 np0005543037 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  2 19:19:24 np0005543037 kernel: 	Rude variant of Tasks RCU enabled.
Dec  2 19:19:24 np0005543037 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  2 19:19:24 np0005543037 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  2 19:19:24 np0005543037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  2 19:19:24 np0005543037 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 19:19:24 np0005543037 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 19:19:24 np0005543037 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  2 19:19:24 np0005543037 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  2 19:19:24 np0005543037 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  2 19:19:24 np0005543037 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  2 19:19:24 np0005543037 kernel: Console: colour VGA+ 80x25
Dec  2 19:19:24 np0005543037 kernel: printk: console [ttyS0] enabled
Dec  2 19:19:24 np0005543037 kernel: ACPI: Core revision 20230331
Dec  2 19:19:24 np0005543037 kernel: APIC: Switch to symmetric I/O mode setup
Dec  2 19:19:24 np0005543037 kernel: x2apic enabled
Dec  2 19:19:24 np0005543037 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  2 19:19:24 np0005543037 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  2 19:19:24 np0005543037 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  2 19:19:24 np0005543037 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  2 19:19:24 np0005543037 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  2 19:19:24 np0005543037 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  2 19:19:24 np0005543037 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  2 19:19:24 np0005543037 kernel: Spectre V2 : Mitigation: Retpolines
Dec  2 19:19:24 np0005543037 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  2 19:19:24 np0005543037 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  2 19:19:24 np0005543037 kernel: RETBleed: Mitigation: untrained return thunk
Dec  2 19:19:24 np0005543037 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  2 19:19:24 np0005543037 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  2 19:19:24 np0005543037 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  2 19:19:24 np0005543037 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  2 19:19:24 np0005543037 kernel: x86/bugs: return thunk changed
Dec  2 19:19:24 np0005543037 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  2 19:19:24 np0005543037 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  2 19:19:24 np0005543037 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  2 19:19:24 np0005543037 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  2 19:19:24 np0005543037 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  2 19:19:24 np0005543037 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  2 19:19:24 np0005543037 kernel: Freeing SMP alternatives memory: 40K
Dec  2 19:19:24 np0005543037 kernel: pid_max: default: 32768 minimum: 301
Dec  2 19:19:24 np0005543037 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  2 19:19:24 np0005543037 kernel: landlock: Up and running.
Dec  2 19:19:24 np0005543037 kernel: Yama: becoming mindful.
Dec  2 19:19:24 np0005543037 kernel: SELinux:  Initializing.
Dec  2 19:19:24 np0005543037 kernel: LSM support for eBPF active
Dec  2 19:19:24 np0005543037 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  2 19:19:24 np0005543037 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  2 19:19:24 np0005543037 kernel: ... version:                0
Dec  2 19:19:24 np0005543037 kernel: ... bit width:              48
Dec  2 19:19:24 np0005543037 kernel: ... generic registers:      6
Dec  2 19:19:24 np0005543037 kernel: ... value mask:             0000ffffffffffff
Dec  2 19:19:24 np0005543037 kernel: ... max period:             00007fffffffffff
Dec  2 19:19:24 np0005543037 kernel: ... fixed-purpose events:   0
Dec  2 19:19:24 np0005543037 kernel: ... event mask:             000000000000003f
Dec  2 19:19:24 np0005543037 kernel: signal: max sigframe size: 1776
Dec  2 19:19:24 np0005543037 kernel: rcu: Hierarchical SRCU implementation.
Dec  2 19:19:24 np0005543037 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  2 19:19:24 np0005543037 kernel: smp: Bringing up secondary CPUs ...
Dec  2 19:19:24 np0005543037 kernel: smpboot: x86: Booting SMP configuration:
Dec  2 19:19:24 np0005543037 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  2 19:19:24 np0005543037 kernel: smp: Brought up 1 node, 8 CPUs
Dec  2 19:19:24 np0005543037 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  2 19:19:24 np0005543037 kernel: node 0 deferred pages initialised in 248ms
Dec  2 19:19:24 np0005543037 kernel: Memory: 7763848K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618212K reserved, 0K cma-reserved)
Dec  2 19:19:24 np0005543037 kernel: devtmpfs: initialized
Dec  2 19:19:24 np0005543037 kernel: x86/mm: Memory block size: 128MB
Dec  2 19:19:24 np0005543037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  2 19:19:24 np0005543037 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  2 19:19:24 np0005543037 kernel: pinctrl core: initialized pinctrl subsystem
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  2 19:19:24 np0005543037 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  2 19:19:24 np0005543037 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  2 19:19:24 np0005543037 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  2 19:19:24 np0005543037 kernel: audit: initializing netlink subsys (disabled)
Dec  2 19:19:24 np0005543037 kernel: audit: type=2000 audit(1764721160.598:1): state=initialized audit_enabled=0 res=1
Dec  2 19:19:24 np0005543037 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  2 19:19:24 np0005543037 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  2 19:19:24 np0005543037 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  2 19:19:24 np0005543037 kernel: cpuidle: using governor menu
Dec  2 19:19:24 np0005543037 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  2 19:19:24 np0005543037 kernel: PCI: Using configuration type 1 for base access
Dec  2 19:19:24 np0005543037 kernel: PCI: Using configuration type 1 for extended access
Dec  2 19:19:24 np0005543037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  2 19:19:24 np0005543037 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  2 19:19:24 np0005543037 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  2 19:19:24 np0005543037 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  2 19:19:24 np0005543037 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  2 19:19:24 np0005543037 kernel: Demotion targets for Node 0: null
Dec  2 19:19:24 np0005543037 kernel: cryptd: max_cpu_qlen set to 1000
Dec  2 19:19:24 np0005543037 kernel: ACPI: Added _OSI(Module Device)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Added _OSI(Processor Device)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  2 19:19:24 np0005543037 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  2 19:19:24 np0005543037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  2 19:19:24 np0005543037 kernel: ACPI: Interpreter enabled
Dec  2 19:19:24 np0005543037 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  2 19:19:24 np0005543037 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  2 19:19:24 np0005543037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  2 19:19:24 np0005543037 kernel: PCI: Using E820 reservations for host bridge windows
Dec  2 19:19:24 np0005543037 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  2 19:19:24 np0005543037 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [3] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [4] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [5] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [6] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [7] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [8] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [9] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [10] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [11] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [12] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [13] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [14] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [15] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [16] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [17] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [18] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [19] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [20] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [21] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [22] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [23] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [24] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [25] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [26] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [27] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [28] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [29] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [30] registered
Dec  2 19:19:24 np0005543037 kernel: acpiphp: Slot [31] registered
Dec  2 19:19:24 np0005543037 kernel: PCI host bridge to bus 0000:00
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  2 19:19:24 np0005543037 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  2 19:19:24 np0005543037 kernel: iommu: Default domain type: Translated
Dec  2 19:19:24 np0005543037 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  2 19:19:24 np0005543037 kernel: SCSI subsystem initialized
Dec  2 19:19:24 np0005543037 kernel: ACPI: bus type USB registered
Dec  2 19:19:24 np0005543037 kernel: usbcore: registered new interface driver usbfs
Dec  2 19:19:24 np0005543037 kernel: usbcore: registered new interface driver hub
Dec  2 19:19:24 np0005543037 kernel: usbcore: registered new device driver usb
Dec  2 19:19:24 np0005543037 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  2 19:19:24 np0005543037 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  2 19:19:24 np0005543037 kernel: PTP clock support registered
Dec  2 19:19:24 np0005543037 kernel: EDAC MC: Ver: 3.0.0
Dec  2 19:19:24 np0005543037 kernel: NetLabel: Initializing
Dec  2 19:19:24 np0005543037 kernel: NetLabel:  domain hash size = 128
Dec  2 19:19:24 np0005543037 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  2 19:19:24 np0005543037 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  2 19:19:24 np0005543037 kernel: PCI: Using ACPI for IRQ routing
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  2 19:19:24 np0005543037 kernel: vgaarb: loaded
Dec  2 19:19:24 np0005543037 kernel: clocksource: Switched to clocksource kvm-clock
Dec  2 19:19:24 np0005543037 kernel: VFS: Disk quotas dquot_6.6.0
Dec  2 19:19:24 np0005543037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  2 19:19:24 np0005543037 kernel: pnp: PnP ACPI init
Dec  2 19:19:24 np0005543037 kernel: pnp: PnP ACPI: found 5 devices
Dec  2 19:19:24 np0005543037 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_INET protocol family
Dec  2 19:19:24 np0005543037 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  2 19:19:24 np0005543037 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_XDP protocol family
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  2 19:19:24 np0005543037 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  2 19:19:24 np0005543037 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  2 19:19:24 np0005543037 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 96761 usecs
Dec  2 19:19:24 np0005543037 kernel: PCI: CLS 0 bytes, default 64
Dec  2 19:19:24 np0005543037 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  2 19:19:24 np0005543037 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  2 19:19:24 np0005543037 kernel: Trying to unpack rootfs image as initramfs...
Dec  2 19:19:24 np0005543037 kernel: ACPI: bus type thunderbolt registered
Dec  2 19:19:24 np0005543037 kernel: Initialise system trusted keyrings
Dec  2 19:19:24 np0005543037 kernel: Key type blacklist registered
Dec  2 19:19:24 np0005543037 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  2 19:19:24 np0005543037 kernel: zbud: loaded
Dec  2 19:19:24 np0005543037 kernel: integrity: Platform Keyring initialized
Dec  2 19:19:24 np0005543037 kernel: integrity: Machine keyring initialized
Dec  2 19:19:24 np0005543037 kernel: Freeing initrd memory: 87804K
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_ALG protocol family
Dec  2 19:19:24 np0005543037 kernel: xor: automatically using best checksumming function   avx       
Dec  2 19:19:24 np0005543037 kernel: Key type asymmetric registered
Dec  2 19:19:24 np0005543037 kernel: Asymmetric key parser 'x509' registered
Dec  2 19:19:24 np0005543037 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  2 19:19:24 np0005543037 kernel: io scheduler mq-deadline registered
Dec  2 19:19:24 np0005543037 kernel: io scheduler kyber registered
Dec  2 19:19:24 np0005543037 kernel: io scheduler bfq registered
Dec  2 19:19:24 np0005543037 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  2 19:19:24 np0005543037 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  2 19:19:24 np0005543037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  2 19:19:24 np0005543037 kernel: ACPI: button: Power Button [PWRF]
Dec  2 19:19:24 np0005543037 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  2 19:19:24 np0005543037 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  2 19:19:24 np0005543037 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  2 19:19:24 np0005543037 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  2 19:19:24 np0005543037 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  2 19:19:24 np0005543037 kernel: Non-volatile memory driver v1.3
Dec  2 19:19:24 np0005543037 kernel: rdac: device handler registered
Dec  2 19:19:24 np0005543037 kernel: hp_sw: device handler registered
Dec  2 19:19:24 np0005543037 kernel: emc: device handler registered
Dec  2 19:19:24 np0005543037 kernel: alua: device handler registered
Dec  2 19:19:24 np0005543037 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  2 19:19:24 np0005543037 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  2 19:19:24 np0005543037 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  2 19:19:24 np0005543037 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  2 19:19:24 np0005543037 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  2 19:19:24 np0005543037 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  2 19:19:24 np0005543037 kernel: usb usb1: Product: UHCI Host Controller
Dec  2 19:19:24 np0005543037 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  2 19:19:24 np0005543037 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  2 19:19:24 np0005543037 kernel: hub 1-0:1.0: USB hub found
Dec  2 19:19:24 np0005543037 kernel: hub 1-0:1.0: 2 ports detected
Dec  2 19:19:24 np0005543037 kernel: usbcore: registered new interface driver usbserial_generic
Dec  2 19:19:24 np0005543037 kernel: usbserial: USB Serial support registered for generic
Dec  2 19:19:24 np0005543037 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  2 19:19:24 np0005543037 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  2 19:19:24 np0005543037 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  2 19:19:24 np0005543037 kernel: mousedev: PS/2 mouse device common for all mice
Dec  2 19:19:24 np0005543037 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  2 19:19:24 np0005543037 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  2 19:19:24 np0005543037 kernel: rtc_cmos 00:04: registered as rtc0
Dec  2 19:19:24 np0005543037 kernel: rtc_cmos 00:04: setting system clock to 2025-12-03T00:19:23 UTC (1764721163)
Dec  2 19:19:24 np0005543037 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  2 19:19:24 np0005543037 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  2 19:19:24 np0005543037 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  2 19:19:24 np0005543037 kernel: usbcore: registered new interface driver usbhid
Dec  2 19:19:24 np0005543037 kernel: usbhid: USB HID core driver
Dec  2 19:19:24 np0005543037 kernel: drop_monitor: Initializing network drop monitor service
Dec  2 19:19:24 np0005543037 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  2 19:19:24 np0005543037 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  2 19:19:24 np0005543037 kernel: Initializing XFRM netlink socket
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_INET6 protocol family
Dec  2 19:19:24 np0005543037 kernel: Segment Routing with IPv6
Dec  2 19:19:24 np0005543037 kernel: NET: Registered PF_PACKET protocol family
Dec  2 19:19:24 np0005543037 kernel: mpls_gso: MPLS GSO support
Dec  2 19:19:24 np0005543037 kernel: IPI shorthand broadcast: enabled
Dec  2 19:19:24 np0005543037 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  2 19:19:24 np0005543037 kernel: AES CTR mode by8 optimization enabled
Dec  2 19:19:24 np0005543037 kernel: sched_clock: Marking stable (3248008139, 140160430)->(3532029959, -143861390)
Dec  2 19:19:24 np0005543037 kernel: registered taskstats version 1
Dec  2 19:19:24 np0005543037 kernel: Loading compiled-in X.509 certificates
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  2 19:19:24 np0005543037 kernel: Demotion targets for Node 0: null
Dec  2 19:19:24 np0005543037 kernel: page_owner is disabled
Dec  2 19:19:24 np0005543037 kernel: Key type .fscrypt registered
Dec  2 19:19:24 np0005543037 kernel: Key type fscrypt-provisioning registered
Dec  2 19:19:24 np0005543037 kernel: Key type big_key registered
Dec  2 19:19:24 np0005543037 kernel: Key type encrypted registered
Dec  2 19:19:24 np0005543037 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  2 19:19:24 np0005543037 kernel: Loading compiled-in module X.509 certificates
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  2 19:19:24 np0005543037 kernel: ima: Allocated hash algorithm: sha256
Dec  2 19:19:24 np0005543037 kernel: ima: No architecture policies found
Dec  2 19:19:24 np0005543037 kernel: evm: Initialising EVM extended attributes:
Dec  2 19:19:24 np0005543037 kernel: evm: security.selinux
Dec  2 19:19:24 np0005543037 kernel: evm: security.SMACK64 (disabled)
Dec  2 19:19:24 np0005543037 kernel: evm: security.SMACK64EXEC (disabled)
Dec  2 19:19:24 np0005543037 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  2 19:19:24 np0005543037 kernel: evm: security.SMACK64MMAP (disabled)
Dec  2 19:19:24 np0005543037 kernel: evm: security.apparmor (disabled)
Dec  2 19:19:24 np0005543037 kernel: evm: security.ima
Dec  2 19:19:24 np0005543037 kernel: evm: security.capability
Dec  2 19:19:24 np0005543037 kernel: evm: HMAC attrs: 0x1
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: Manufacturer: QEMU
Dec  2 19:19:24 np0005543037 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  2 19:19:24 np0005543037 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  2 19:19:24 np0005543037 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  2 19:19:24 np0005543037 kernel: Running certificate verification RSA selftest
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  2 19:19:24 np0005543037 kernel: Running certificate verification ECDSA selftest
Dec  2 19:19:24 np0005543037 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  2 19:19:24 np0005543037 kernel: clk: Disabling unused clocks
Dec  2 19:19:24 np0005543037 kernel: Freeing unused decrypted memory: 2028K
Dec  2 19:19:24 np0005543037 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  2 19:19:24 np0005543037 kernel: Write protecting the kernel read-only data: 30720k
Dec  2 19:19:24 np0005543037 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  2 19:19:24 np0005543037 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  2 19:19:24 np0005543037 kernel: Run /init as init process
Dec  2 19:19:24 np0005543037 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 19:19:24 np0005543037 systemd: Detected virtualization kvm.
Dec  2 19:19:24 np0005543037 systemd: Detected architecture x86-64.
Dec  2 19:19:24 np0005543037 systemd: Running in initrd.
Dec  2 19:19:24 np0005543037 systemd: No hostname configured, using default hostname.
Dec  2 19:19:24 np0005543037 systemd: Hostname set to <localhost>.
Dec  2 19:19:24 np0005543037 systemd: Initializing machine ID from VM UUID.
Dec  2 19:19:24 np0005543037 systemd: Queued start job for default target Initrd Default Target.
Dec  2 19:19:24 np0005543037 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 19:19:24 np0005543037 systemd: Reached target Local Encrypted Volumes.
Dec  2 19:19:24 np0005543037 systemd: Reached target Initrd /usr File System.
Dec  2 19:19:24 np0005543037 systemd: Reached target Local File Systems.
Dec  2 19:19:24 np0005543037 systemd: Reached target Path Units.
Dec  2 19:19:24 np0005543037 systemd: Reached target Slice Units.
Dec  2 19:19:24 np0005543037 systemd: Reached target Swaps.
Dec  2 19:19:24 np0005543037 systemd: Reached target Timer Units.
Dec  2 19:19:24 np0005543037 systemd: Listening on D-Bus System Message Bus Socket.
Dec  2 19:19:24 np0005543037 systemd: Listening on Journal Socket (/dev/log).
Dec  2 19:19:24 np0005543037 systemd: Listening on Journal Socket.
Dec  2 19:19:24 np0005543037 systemd: Listening on udev Control Socket.
Dec  2 19:19:24 np0005543037 systemd: Listening on udev Kernel Socket.
Dec  2 19:19:24 np0005543037 systemd: Reached target Socket Units.
Dec  2 19:19:24 np0005543037 systemd: Starting Create List of Static Device Nodes...
Dec  2 19:19:24 np0005543037 systemd: Starting Journal Service...
Dec  2 19:19:24 np0005543037 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 19:19:24 np0005543037 systemd: Starting Apply Kernel Variables...
Dec  2 19:19:24 np0005543037 systemd: Starting Create System Users...
Dec  2 19:19:24 np0005543037 systemd: Starting Setup Virtual Console...
Dec  2 19:19:24 np0005543037 systemd: Finished Create List of Static Device Nodes.
Dec  2 19:19:24 np0005543037 systemd: Finished Apply Kernel Variables.
Dec  2 19:19:24 np0005543037 systemd-journald[306]: Journal started
Dec  2 19:19:24 np0005543037 systemd-journald[306]: Runtime Journal (/run/log/journal/bb85f21b9f67464f8fbee50d4e1e7eb4) is 8.0M, max 153.6M, 145.6M free.
Dec  2 19:19:24 np0005543037 systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec  2 19:19:24 np0005543037 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec  2 19:19:24 np0005543037 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  2 19:19:24 np0005543037 systemd: Started Journal Service.
Dec  2 19:19:24 np0005543037 systemd[1]: Finished Create System Users.
Dec  2 19:19:24 np0005543037 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 19:19:24 np0005543037 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 19:19:24 np0005543037 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 19:19:24 np0005543037 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 19:19:24 np0005543037 systemd[1]: Finished Setup Virtual Console.
Dec  2 19:19:24 np0005543037 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  2 19:19:24 np0005543037 systemd[1]: Starting dracut cmdline hook...
Dec  2 19:19:24 np0005543037 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec  2 19:19:24 np0005543037 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  2 19:19:24 np0005543037 systemd[1]: Finished dracut cmdline hook.
Dec  2 19:19:24 np0005543037 systemd[1]: Starting dracut pre-udev hook...
Dec  2 19:19:24 np0005543037 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  2 19:19:24 np0005543037 kernel: device-mapper: uevent: version 1.0.3
Dec  2 19:19:24 np0005543037 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  2 19:19:25 np0005543037 kernel: RPC: Registered named UNIX socket transport module.
Dec  2 19:19:25 np0005543037 kernel: RPC: Registered udp transport module.
Dec  2 19:19:25 np0005543037 kernel: RPC: Registered tcp transport module.
Dec  2 19:19:25 np0005543037 kernel: RPC: Registered tcp-with-tls transport module.
Dec  2 19:19:25 np0005543037 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  2 19:19:25 np0005543037 rpc.statd[445]: Version 2.5.4 starting
Dec  2 19:19:25 np0005543037 rpc.statd[445]: Initializing NSM state
Dec  2 19:19:25 np0005543037 rpc.idmapd[450]: Setting log level to 0
Dec  2 19:19:25 np0005543037 systemd[1]: Finished dracut pre-udev hook.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 19:19:25 np0005543037 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 19:19:25 np0005543037 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting dracut pre-trigger hook...
Dec  2 19:19:25 np0005543037 systemd[1]: Finished dracut pre-trigger hook.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting Coldplug All udev Devices...
Dec  2 19:19:25 np0005543037 systemd[1]: Created slice Slice /system/modprobe.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 19:19:25 np0005543037 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 19:19:25 np0005543037 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 19:19:25 np0005543037 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 19:19:25 np0005543037 systemd[1]: Mounting Kernel Configuration File System...
Dec  2 19:19:25 np0005543037 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Network.
Dec  2 19:19:25 np0005543037 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  2 19:19:25 np0005543037 systemd[1]: Starting dracut initqueue hook...
Dec  2 19:19:25 np0005543037 systemd[1]: Mounted Kernel Configuration File System.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target System Initialization.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Basic System.
Dec  2 19:19:25 np0005543037 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  2 19:19:25 np0005543037 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  2 19:19:25 np0005543037 kernel: vda: vda1
Dec  2 19:19:25 np0005543037 kernel: scsi host0: ata_piix
Dec  2 19:19:25 np0005543037 kernel: scsi host1: ata_piix
Dec  2 19:19:25 np0005543037 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  2 19:19:25 np0005543037 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  2 19:19:25 np0005543037 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Initrd Root Device.
Dec  2 19:19:25 np0005543037 kernel: ata1: found unknown device (class 0)
Dec  2 19:19:25 np0005543037 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  2 19:19:25 np0005543037 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  2 19:19:25 np0005543037 systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:19:25 np0005543037 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  2 19:19:25 np0005543037 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  2 19:19:25 np0005543037 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  2 19:19:25 np0005543037 systemd[1]: Finished dracut initqueue hook.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  2 19:19:25 np0005543037 systemd[1]: Reached target Remote File Systems.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting dracut pre-mount hook...
Dec  2 19:19:25 np0005543037 systemd[1]: Finished dracut pre-mount hook.
Dec  2 19:19:25 np0005543037 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  2 19:19:25 np0005543037 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Dec  2 19:19:25 np0005543037 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  2 19:19:25 np0005543037 systemd[1]: Mounting /sysroot...
Dec  2 19:19:26 np0005543037 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  2 19:19:26 np0005543037 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  2 19:19:26 np0005543037 kernel: XFS (vda1): Ending clean mount
Dec  2 19:19:26 np0005543037 systemd[1]: Mounted /sysroot.
Dec  2 19:19:26 np0005543037 systemd[1]: Reached target Initrd Root File System.
Dec  2 19:19:26 np0005543037 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  2 19:19:26 np0005543037 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  2 19:19:26 np0005543037 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  2 19:19:26 np0005543037 systemd[1]: Reached target Initrd File Systems.
Dec  2 19:19:26 np0005543037 systemd[1]: Reached target Initrd Default Target.
Dec  2 19:19:26 np0005543037 systemd[1]: Starting dracut mount hook...
Dec  2 19:19:26 np0005543037 systemd[1]: Finished dracut mount hook.
Dec  2 19:19:26 np0005543037 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  2 19:19:27 np0005543037 rpc.idmapd[450]: exiting on signal 15
Dec  2 19:19:27 np0005543037 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  2 19:19:27 np0005543037 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Network.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Timer Units.
Dec  2 19:19:27 np0005543037 systemd[1]: dbus.socket: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Initrd Default Target.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Basic System.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Initrd Root Device.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Initrd /usr File System.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Path Units.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Remote File Systems.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Slice Units.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Socket Units.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target System Initialization.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Local File Systems.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Swaps.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut mount hook.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut pre-mount hook.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut initqueue hook.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Coldplug All udev Devices.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut pre-trigger hook.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Setup Virtual Console.
Dec  2 19:19:27 np0005543037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-udevd.service: Consumed 1.046s CPU time.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Closed udev Control Socket.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Closed udev Kernel Socket.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut pre-udev hook.
Dec  2 19:19:27 np0005543037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped dracut cmdline hook.
Dec  2 19:19:27 np0005543037 systemd[1]: Starting Cleanup udev Database...
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  2 19:19:27 np0005543037 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  2 19:19:27 np0005543037 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Stopped Create System Users.
Dec  2 19:19:27 np0005543037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  2 19:19:27 np0005543037 systemd[1]: Finished Cleanup udev Database.
Dec  2 19:19:27 np0005543037 systemd[1]: Reached target Switch Root.
Dec  2 19:19:27 np0005543037 systemd[1]: Starting Switch Root...
Dec  2 19:19:27 np0005543037 systemd[1]: Switching root.
Dec  2 19:19:27 np0005543037 systemd-journald[306]: Journal stopped
Dec  2 19:19:28 np0005543037 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  2 19:19:28 np0005543037 kernel: audit: type=1404 audit(1764721167.254:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:19:28 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:19:28 np0005543037 kernel: audit: type=1403 audit(1764721167.408:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  2 19:19:28 np0005543037 systemd: Successfully loaded SELinux policy in 160.432ms.
Dec  2 19:19:28 np0005543037 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.726ms.
Dec  2 19:19:28 np0005543037 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  2 19:19:28 np0005543037 systemd: Detected virtualization kvm.
Dec  2 19:19:28 np0005543037 systemd: Detected architecture x86-64.
Dec  2 19:19:28 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:19:28 np0005543037 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd: Stopped Switch Root.
Dec  2 19:19:28 np0005543037 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  2 19:19:28 np0005543037 systemd: Created slice Slice /system/getty.
Dec  2 19:19:28 np0005543037 systemd: Created slice Slice /system/serial-getty.
Dec  2 19:19:28 np0005543037 systemd: Created slice Slice /system/sshd-keygen.
Dec  2 19:19:28 np0005543037 systemd: Created slice User and Session Slice.
Dec  2 19:19:28 np0005543037 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  2 19:19:28 np0005543037 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  2 19:19:28 np0005543037 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  2 19:19:28 np0005543037 systemd: Reached target Local Encrypted Volumes.
Dec  2 19:19:28 np0005543037 systemd: Stopped target Switch Root.
Dec  2 19:19:28 np0005543037 systemd: Stopped target Initrd File Systems.
Dec  2 19:19:28 np0005543037 systemd: Stopped target Initrd Root File System.
Dec  2 19:19:28 np0005543037 systemd: Reached target Local Integrity Protected Volumes.
Dec  2 19:19:28 np0005543037 systemd: Reached target Path Units.
Dec  2 19:19:28 np0005543037 systemd: Reached target rpc_pipefs.target.
Dec  2 19:19:28 np0005543037 systemd: Reached target Slice Units.
Dec  2 19:19:28 np0005543037 systemd: Reached target Swaps.
Dec  2 19:19:28 np0005543037 systemd: Reached target Local Verity Protected Volumes.
Dec  2 19:19:28 np0005543037 systemd: Listening on RPCbind Server Activation Socket.
Dec  2 19:19:28 np0005543037 systemd: Reached target RPC Port Mapper.
Dec  2 19:19:28 np0005543037 systemd: Listening on Process Core Dump Socket.
Dec  2 19:19:28 np0005543037 systemd: Listening on initctl Compatibility Named Pipe.
Dec  2 19:19:28 np0005543037 systemd: Listening on udev Control Socket.
Dec  2 19:19:28 np0005543037 systemd: Listening on udev Kernel Socket.
Dec  2 19:19:28 np0005543037 systemd: Mounting Huge Pages File System...
Dec  2 19:19:28 np0005543037 systemd: Mounting POSIX Message Queue File System...
Dec  2 19:19:28 np0005543037 systemd: Mounting Kernel Debug File System...
Dec  2 19:19:28 np0005543037 systemd: Mounting Kernel Trace File System...
Dec  2 19:19:28 np0005543037 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 19:19:28 np0005543037 systemd: Starting Create List of Static Device Nodes...
Dec  2 19:19:28 np0005543037 systemd: Starting Load Kernel Module configfs...
Dec  2 19:19:28 np0005543037 systemd: Starting Load Kernel Module drm...
Dec  2 19:19:28 np0005543037 systemd: Starting Load Kernel Module efi_pstore...
Dec  2 19:19:28 np0005543037 systemd: Starting Load Kernel Module fuse...
Dec  2 19:19:28 np0005543037 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  2 19:19:28 np0005543037 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd: Stopped File System Check on Root Device.
Dec  2 19:19:28 np0005543037 systemd: Stopped Journal Service.
Dec  2 19:19:28 np0005543037 systemd: Starting Journal Service...
Dec  2 19:19:28 np0005543037 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  2 19:19:28 np0005543037 systemd: Starting Generate network units from Kernel command line...
Dec  2 19:19:28 np0005543037 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 19:19:28 np0005543037 systemd: Starting Remount Root and Kernel File Systems...
Dec  2 19:19:28 np0005543037 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  2 19:19:28 np0005543037 systemd: Starting Apply Kernel Variables...
Dec  2 19:19:28 np0005543037 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  2 19:19:28 np0005543037 systemd: Starting Coldplug All udev Devices...
Dec  2 19:19:28 np0005543037 systemd: Mounted Huge Pages File System.
Dec  2 19:19:28 np0005543037 systemd: Mounted POSIX Message Queue File System.
Dec  2 19:19:28 np0005543037 systemd: Mounted Kernel Debug File System.
Dec  2 19:19:28 np0005543037 systemd: Mounted Kernel Trace File System.
Dec  2 19:19:28 np0005543037 systemd: Finished Create List of Static Device Nodes.
Dec  2 19:19:28 np0005543037 systemd: modprobe@configfs.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd: Finished Load Kernel Module configfs.
Dec  2 19:19:28 np0005543037 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd: Finished Load Kernel Module efi_pstore.
Dec  2 19:19:28 np0005543037 systemd: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  2 19:19:28 np0005543037 systemd: Finished Generate network units from Kernel command line.
Dec  2 19:19:28 np0005543037 systemd: Finished Remount Root and Kernel File Systems.
Dec  2 19:19:28 np0005543037 systemd: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 19:19:28 np0005543037 systemd: Starting Rebuild Hardware Database...
Dec  2 19:19:28 np0005543037 systemd: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  2 19:19:28 np0005543037 systemd: Starting Load/Save OS Random Seed...
Dec  2 19:19:28 np0005543037 systemd: Starting Create System Users...
Dec  2 19:19:28 np0005543037 systemd-journald[680]: Journal started
Dec  2 19:19:28 np0005543037 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  2 19:19:28 np0005543037 systemd[1]: Queued start job for default target Multi-User System.
Dec  2 19:19:28 np0005543037 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd: Started Journal Service.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Load/Save OS Random Seed.
Dec  2 19:19:28 np0005543037 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  2 19:19:28 np0005543037 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  2 19:19:28 np0005543037 systemd-journald[680]: Received client request to flush runtime journal.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  2 19:19:28 np0005543037 kernel: ACPI: bus type drm_connector registered
Dec  2 19:19:28 np0005543037 kernel: fuse: init (API version 7.37)
Dec  2 19:19:28 np0005543037 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Load Kernel Module drm.
Dec  2 19:19:28 np0005543037 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Load Kernel Module fuse.
Dec  2 19:19:28 np0005543037 systemd[1]: Mounting FUSE Control File System...
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Apply Kernel Variables.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Create System Users.
Dec  2 19:19:28 np0005543037 systemd[1]: Mounted FUSE Control File System.
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Coldplug All udev Devices.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  2 19:19:28 np0005543037 systemd[1]: Reached target Preparation for Local File Systems.
Dec  2 19:19:28 np0005543037 systemd[1]: Reached target Local File Systems.
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  2 19:19:28 np0005543037 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  2 19:19:28 np0005543037 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  2 19:19:28 np0005543037 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Automatic Boot Loader Update...
Dec  2 19:19:28 np0005543037 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Create Volatile Files and Directories...
Dec  2 19:19:28 np0005543037 bootctl[700]: Couldn't find EFI system partition, skipping.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Automatic Boot Loader Update.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Create Volatile Files and Directories.
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Security Auditing Service...
Dec  2 19:19:28 np0005543037 systemd[1]: Starting RPC Bind...
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Rebuild Journal Catalog...
Dec  2 19:19:28 np0005543037 auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  2 19:19:28 np0005543037 auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  2 19:19:28 np0005543037 systemd[1]: Started RPC Bind.
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Rebuild Journal Catalog.
Dec  2 19:19:28 np0005543037 augenrules[711]: /sbin/augenrules: No change
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  2 19:19:28 np0005543037 augenrules[726]: No rules
Dec  2 19:19:28 np0005543037 augenrules[726]: enabled 1
Dec  2 19:19:28 np0005543037 augenrules[726]: failure 1
Dec  2 19:19:28 np0005543037 augenrules[726]: pid 706
Dec  2 19:19:28 np0005543037 augenrules[726]: rate_limit 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_limit 8192
Dec  2 19:19:28 np0005543037 augenrules[726]: lost 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog 3
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time 60000
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time_actual 0
Dec  2 19:19:28 np0005543037 augenrules[726]: enabled 1
Dec  2 19:19:28 np0005543037 augenrules[726]: failure 1
Dec  2 19:19:28 np0005543037 augenrules[726]: pid 706
Dec  2 19:19:28 np0005543037 augenrules[726]: rate_limit 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_limit 8192
Dec  2 19:19:28 np0005543037 augenrules[726]: lost 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog 2
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time 60000
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time_actual 0
Dec  2 19:19:28 np0005543037 augenrules[726]: enabled 1
Dec  2 19:19:28 np0005543037 augenrules[726]: failure 1
Dec  2 19:19:28 np0005543037 augenrules[726]: pid 706
Dec  2 19:19:28 np0005543037 augenrules[726]: rate_limit 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_limit 8192
Dec  2 19:19:28 np0005543037 augenrules[726]: lost 0
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog 1
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time 60000
Dec  2 19:19:28 np0005543037 augenrules[726]: backlog_wait_time_actual 0
Dec  2 19:19:28 np0005543037 systemd[1]: Started Security Auditing Service.
Dec  2 19:19:28 np0005543037 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  2 19:19:28 np0005543037 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  2 19:19:29 np0005543037 systemd[1]: Finished Rebuild Hardware Database.
Dec  2 19:19:29 np0005543037 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  2 19:19:29 np0005543037 systemd[1]: Starting Update is Completed...
Dec  2 19:19:29 np0005543037 systemd[1]: Finished Update is Completed.
Dec  2 19:19:29 np0005543037 systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Dec  2 19:19:29 np0005543037 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target System Initialization.
Dec  2 19:19:29 np0005543037 systemd[1]: Started dnf makecache --timer.
Dec  2 19:19:29 np0005543037 systemd[1]: Started Daily rotation of log files.
Dec  2 19:19:29 np0005543037 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target Timer Units.
Dec  2 19:19:29 np0005543037 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  2 19:19:29 np0005543037 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target Socket Units.
Dec  2 19:19:29 np0005543037 systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:19:29 np0005543037 systemd[1]: Starting D-Bus System Message Bus...
Dec  2 19:19:29 np0005543037 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 19:19:29 np0005543037 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  2 19:19:29 np0005543037 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  2 19:19:29 np0005543037 systemd[1]: Starting Load Kernel Module configfs...
Dec  2 19:19:29 np0005543037 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  2 19:19:29 np0005543037 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  2 19:19:29 np0005543037 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  2 19:19:29 np0005543037 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  2 19:19:29 np0005543037 systemd[1]: Finished Load Kernel Module configfs.
Dec  2 19:19:29 np0005543037 systemd[1]: Started D-Bus System Message Bus.
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target Basic System.
Dec  2 19:19:29 np0005543037 dbus-broker-lau[767]: Ready
Dec  2 19:19:29 np0005543037 systemd[1]: Starting NTP client/server...
Dec  2 19:19:29 np0005543037 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  2 19:19:29 np0005543037 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  2 19:19:29 np0005543037 systemd[1]: Starting IPv4 firewall with iptables...
Dec  2 19:19:29 np0005543037 systemd[1]: Started irqbalance daemon.
Dec  2 19:19:29 np0005543037 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  2 19:19:29 np0005543037 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 19:19:29 np0005543037 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 19:19:29 np0005543037 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target sshd-keygen.target.
Dec  2 19:19:29 np0005543037 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  2 19:19:29 np0005543037 systemd[1]: Reached target User and Group Name Lookups.
Dec  2 19:19:29 np0005543037 systemd[1]: Starting User Login Management...
Dec  2 19:19:29 np0005543037 chronyd[799]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 19:19:29 np0005543037 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  2 19:19:29 np0005543037 chronyd[799]: Loaded 0 symmetric keys
Dec  2 19:19:29 np0005543037 chronyd[799]: Using right/UTC timezone to obtain leap second data
Dec  2 19:19:29 np0005543037 chronyd[799]: Loaded seccomp filter (level 2)
Dec  2 19:19:29 np0005543037 systemd[1]: Started NTP client/server.
Dec  2 19:19:29 np0005543037 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  2 19:19:29 np0005543037 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  2 19:19:29 np0005543037 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  2 19:19:29 np0005543037 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  2 19:19:29 np0005543037 kernel: kvm_amd: TSC scaling supported
Dec  2 19:19:29 np0005543037 kernel: kvm_amd: Nested Virtualization enabled
Dec  2 19:19:29 np0005543037 kernel: kvm_amd: Nested Paging enabled
Dec  2 19:19:29 np0005543037 kernel: kvm_amd: LBR virtualization supported
Dec  2 19:19:29 np0005543037 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  2 19:19:29 np0005543037 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  2 19:19:29 np0005543037 kernel: Console: switching to colour dummy device 80x25
Dec  2 19:19:29 np0005543037 systemd-logind[800]: New seat seat0.
Dec  2 19:19:29 np0005543037 systemd[1]: Started User Login Management.
Dec  2 19:19:29 np0005543037 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  2 19:19:29 np0005543037 kernel: [drm] features: -context_init
Dec  2 19:19:29 np0005543037 kernel: [drm] number of scanouts: 1
Dec  2 19:19:29 np0005543037 kernel: [drm] number of cap sets: 0
Dec  2 19:19:29 np0005543037 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  2 19:19:29 np0005543037 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  2 19:19:29 np0005543037 kernel: Console: switching to colour frame buffer device 128x48
Dec  2 19:19:29 np0005543037 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  2 19:19:29 np0005543037 iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Dec  2 19:19:29 np0005543037 systemd[1]: Finished IPv4 firewall with iptables.
Dec  2 19:19:29 np0005543037 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 03 Dec 2025 00:19:29 +0000. Up 9.63 seconds.
Dec  2 19:19:30 np0005543037 systemd[1]: run-cloud\x2dinit-tmp-tmp69jigwq1.mount: Deactivated successfully.
Dec  2 19:19:30 np0005543037 systemd[1]: Starting Hostname Service...
Dec  2 19:19:30 np0005543037 systemd[1]: Started Hostname Service.
Dec  2 19:19:30 np0005543037 systemd-hostnamed[856]: Hostname set to <np0005543037.novalocal> (static)
Dec  2 19:19:30 np0005543037 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  2 19:19:30 np0005543037 systemd[1]: Reached target Preparation for Network.
Dec  2 19:19:30 np0005543037 systemd[1]: Starting Network Manager...
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6563] NetworkManager (version 1.54.1-1.el9) is starting... (boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6570] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6665] manager[0x55d390a2c080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6727] hostname: hostname: using hostnamed
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6730] hostname: static hostname changed from (none) to "np0005543037.novalocal"
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.6736] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7009] manager[0x55d390a2c080]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7012] manager[0x55d390a2c080]: rfkill: WWAN hardware radio set enabled
Dec  2 19:19:30 np0005543037 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7075] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7075] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7113] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7115] manager: Networking is enabled by state file
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7120] settings: Loaded settings plugin: keyfile (internal)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7134] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7171] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7193] dhcp: init: Using DHCP client 'internal'
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7199] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7226] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7243] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7258] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7277] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7284] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:19:30 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7366] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 19:19:30 np0005543037 systemd[1]: Started Network Manager.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7396] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7401] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 systemd[1]: Reached target Network.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7429] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7434] device (eth0): carrier: link connected
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7441] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 systemd[1]: Starting Network Manager Wait Online...
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7453] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7490] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7496] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7498] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7502] manager: NetworkManager state is now CONNECTING
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7504] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7537] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7584] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7599] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7614] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7622] device (lo): Activation: successful, device activated.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7650] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7658] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7682] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  2 19:19:30 np0005543037 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7725] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 systemd[1]: Reached target NFS client services.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7744] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:19:30 np0005543037 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7748] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 19:19:30 np0005543037 systemd[1]: Reached target Remote File Systems.
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7790] device (eth0): Activation: successful, device activated.
Dec  2 19:19:30 np0005543037 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7795] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 19:19:30 np0005543037 NetworkManager[860]: <info>  [1764721170.7797] manager: startup complete
Dec  2 19:19:30 np0005543037 systemd[1]: Finished Network Manager Wait Online.
Dec  2 19:19:30 np0005543037 systemd[1]: Starting Cloud-init: Network Stage...
Dec  2 19:19:31 np0005543037 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 03 Dec 2025 00:19:31 +0000. Up 10.79 seconds.
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.36         | 255.255.255.0 | global | fa:16:3e:ce:79:91 |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fece:7991/64 |       .       |  link  | fa:16:3e:ce:79:91 |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  2 19:19:31 np0005543037 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  2 19:19:32 np0005543037 cloud-init[921]: Generating public/private rsa key pair.
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key fingerprint is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: SHA256:dPDuaZkY0Hen/1J6vLkN7MwQMhDRNHsDW0aT172fDlI root@np0005543037.novalocal
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key's randomart image is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: +---[RSA 3072]----+
Dec  2 19:19:32 np0005543037 cloud-init[921]: |        +++.=. ..|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |       . +.B... o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |      . + * +.. .|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |       o = o E . |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |        S + +   o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |         + B = .o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |        . * o *+ |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |         .   =o+=|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |              +*=|
Dec  2 19:19:32 np0005543037 cloud-init[921]: +----[SHA256]-----+
Dec  2 19:19:32 np0005543037 cloud-init[921]: Generating public/private ecdsa key pair.
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key fingerprint is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: SHA256:dWoIifJ31OIrZUzznfO9wx63pVUXcHxZWhOkEbFoxQA root@np0005543037.novalocal
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key's randomart image is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: +---[ECDSA 256]---+
Dec  2 19:19:32 np0005543037 cloud-init[921]: |         E..o*==*|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |     . . .  o.*=o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |  . . o = oo.o...|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |   o   * *.+ .  .|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |    . . S + +   o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |     . + o   o .o|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |      . .     o.=|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |       .       =*|
Dec  2 19:19:32 np0005543037 cloud-init[921]: |              o+.|
Dec  2 19:19:32 np0005543037 cloud-init[921]: +----[SHA256]-----+
Dec  2 19:19:32 np0005543037 cloud-init[921]: Generating public/private ed25519 key pair.
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  2 19:19:32 np0005543037 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key fingerprint is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: SHA256:MaCSeqWk4+hqxePgtr62EbNt8KngMYsqlCAR6580N/I root@np0005543037.novalocal
Dec  2 19:19:32 np0005543037 cloud-init[921]: The key's randomart image is:
Dec  2 19:19:32 np0005543037 cloud-init[921]: +--[ED25519 256]--+
Dec  2 19:19:32 np0005543037 cloud-init[921]: |..    .          |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |.. . . .         |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |..+ o   o        |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |++ +     o       |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |*==+ o  S        |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |o*X+B .          |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |=*+B.E           |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |*=B.             |
Dec  2 19:19:32 np0005543037 cloud-init[921]: |%X+              |
Dec  2 19:19:32 np0005543037 cloud-init[921]: +----[SHA256]-----+
Dec  2 19:19:32 np0005543037 systemd[1]: Finished Cloud-init: Network Stage.
Dec  2 19:19:32 np0005543037 systemd[1]: Reached target Cloud-config availability.
Dec  2 19:19:32 np0005543037 systemd[1]: Reached target Network is Online.
Dec  2 19:19:32 np0005543037 systemd[1]: Starting Cloud-init: Config Stage...
Dec  2 19:19:32 np0005543037 systemd[1]: Starting Crash recovery kernel arming...
Dec  2 19:19:32 np0005543037 systemd[1]: Starting Notify NFS peers of a restart...
Dec  2 19:19:32 np0005543037 systemd[1]: Starting System Logging Service...
Dec  2 19:19:32 np0005543037 sm-notify[1003]: Version 2.5.4 starting
Dec  2 19:19:32 np0005543037 systemd[1]: Starting OpenSSH server daemon...
Dec  2 19:19:32 np0005543037 systemd[1]: Starting Permit User Sessions...
Dec  2 19:19:32 np0005543037 systemd[1]: Started Notify NFS peers of a restart.
Dec  2 19:19:32 np0005543037 systemd[1]: Finished Permit User Sessions.
Dec  2 19:19:32 np0005543037 systemd[1]: Started Command Scheduler.
Dec  2 19:19:32 np0005543037 systemd[1]: Started Getty on tty1.
Dec  2 19:19:32 np0005543037 systemd[1]: Started Serial Getty on ttyS0.
Dec  2 19:19:32 np0005543037 systemd[1]: Reached target Login Prompts.
Dec  2 19:19:32 np0005543037 systemd[1]: Started OpenSSH server daemon.
Dec  2 19:19:32 np0005543037 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec  2 19:19:32 np0005543037 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  2 19:19:32 np0005543037 systemd[1]: Started System Logging Service.
Dec  2 19:19:32 np0005543037 systemd[1]: Reached target Multi-User System.
Dec  2 19:19:32 np0005543037 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  2 19:19:32 np0005543037 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  2 19:19:32 np0005543037 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  2 19:19:32 np0005543037 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 19:19:32 np0005543037 kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Dec  2 19:19:32 np0005543037 kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  2 19:19:32 np0005543037 cloud-init[1130]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 03 Dec 2025 00:19:32 +0000. Up 12.55 seconds.
Dec  2 19:19:33 np0005543037 systemd[1]: Finished Cloud-init: Config Stage.
Dec  2 19:19:33 np0005543037 systemd[1]: Starting Cloud-init: Final Stage...
Dec  2 19:19:33 np0005543037 dracut[1284]: dracut-057-102.git20250818.el9
Dec  2 19:19:33 np0005543037 cloud-init[1287]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 03 Dec 2025 00:19:33 +0000. Up 12.98 seconds.
Dec  2 19:19:33 np0005543037 cloud-init[1302]: #############################################################
Dec  2 19:19:33 np0005543037 cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  2 19:19:33 np0005543037 cloud-init[1305]: 256 SHA256:dWoIifJ31OIrZUzznfO9wx63pVUXcHxZWhOkEbFoxQA root@np0005543037.novalocal (ECDSA)
Dec  2 19:19:33 np0005543037 cloud-init[1307]: 256 SHA256:MaCSeqWk4+hqxePgtr62EbNt8KngMYsqlCAR6580N/I root@np0005543037.novalocal (ED25519)
Dec  2 19:19:33 np0005543037 cloud-init[1309]: 3072 SHA256:dPDuaZkY0Hen/1J6vLkN7MwQMhDRNHsDW0aT172fDlI root@np0005543037.novalocal (RSA)
Dec  2 19:19:33 np0005543037 cloud-init[1310]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  2 19:19:33 np0005543037 cloud-init[1311]: #############################################################
Dec  2 19:19:33 np0005543037 dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  2 19:19:33 np0005543037 cloud-init[1287]: Cloud-init v. 24.4-7.el9 finished at Wed, 03 Dec 2025 00:19:33 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.17 seconds
Dec  2 19:19:33 np0005543037 systemd[1]: Finished Cloud-init: Final Stage.
Dec  2 19:19:33 np0005543037 systemd[1]: Reached target Cloud-init target.
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: memstrack is not available
Dec  2 19:19:34 np0005543037 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  2 19:19:34 np0005543037 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  2 19:19:35 np0005543037 dracut[1286]: memstrack is not available
Dec  2 19:19:35 np0005543037 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  2 19:19:35 np0005543037 dracut[1286]: *** Including module: systemd ***
Dec  2 19:19:35 np0005543037 dracut[1286]: *** Including module: fips ***
Dec  2 19:19:35 np0005543037 chronyd[799]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec  2 19:19:35 np0005543037 chronyd[799]: System clock TAI offset set to 37 seconds
Dec  2 19:19:36 np0005543037 dracut[1286]: *** Including module: systemd-initrd ***
Dec  2 19:19:36 np0005543037 dracut[1286]: *** Including module: i18n ***
Dec  2 19:19:36 np0005543037 dracut[1286]: *** Including module: drm ***
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: prefixdevname ***
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: kernel-modules ***
Dec  2 19:19:37 np0005543037 kernel: block vda: the capability attribute has been deprecated.
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: kernel-modules-extra ***
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: qemu ***
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: fstab-sys ***
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: rootfs-block ***
Dec  2 19:19:37 np0005543037 chronyd[799]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec  2 19:19:37 np0005543037 dracut[1286]: *** Including module: terminfo ***
Dec  2 19:19:38 np0005543037 dracut[1286]: *** Including module: udev-rules ***
Dec  2 19:19:38 np0005543037 dracut[1286]: Skipping udev rule: 91-permissions.rules
Dec  2 19:19:38 np0005543037 dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  2 19:19:38 np0005543037 dracut[1286]: *** Including module: virtiofs ***
Dec  2 19:19:38 np0005543037 dracut[1286]: *** Including module: dracut-systemd ***
Dec  2 19:19:39 np0005543037 dracut[1286]: *** Including module: usrmount ***
Dec  2 19:19:39 np0005543037 dracut[1286]: *** Including module: base ***
Dec  2 19:19:39 np0005543037 dracut[1286]: *** Including module: fs-lib ***
Dec  2 19:19:39 np0005543037 dracut[1286]: *** Including module: kdumpbase ***
Dec  2 19:19:39 np0005543037 dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  2 19:19:39 np0005543037 dracut[1286]:  microcode_ctl module: mangling fw_dir
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel" is ignored
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  2 19:19:39 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  2 19:19:40 np0005543037 dracut[1286]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  2 19:19:40 np0005543037 dracut[1286]: *** Including module: openssl ***
Dec  2 19:19:40 np0005543037 dracut[1286]: *** Including module: shutdown ***
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 25 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 31 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 28 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 32 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 30 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  2 19:19:40 np0005543037 irqbalance[792]: IRQ 29 affinity is now unmanaged
Dec  2 19:19:40 np0005543037 dracut[1286]: *** Including module: squash ***
Dec  2 19:19:40 np0005543037 dracut[1286]: *** Including modules done ***
Dec  2 19:19:40 np0005543037 dracut[1286]: *** Installing kernel module dependencies ***
Dec  2 19:19:40 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:19:41 np0005543037 dracut[1286]: *** Installing kernel module dependencies done ***
Dec  2 19:19:41 np0005543037 dracut[1286]: *** Resolving executable dependencies ***
Dec  2 19:19:43 np0005543037 dracut[1286]: *** Resolving executable dependencies done ***
Dec  2 19:19:43 np0005543037 dracut[1286]: *** Generating early-microcode cpio image ***
Dec  2 19:19:43 np0005543037 dracut[1286]: *** Store current command line parameters ***
Dec  2 19:19:43 np0005543037 dracut[1286]: Stored kernel commandline:
Dec  2 19:19:43 np0005543037 dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Dec  2 19:19:43 np0005543037 dracut[1286]: *** Install squash loader ***
Dec  2 19:19:44 np0005543037 dracut[1286]: *** Squashing the files inside the initramfs ***
Dec  2 19:19:45 np0005543037 dracut[1286]: *** Squashing the files inside the initramfs done ***
Dec  2 19:19:45 np0005543037 dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  2 19:19:45 np0005543037 dracut[1286]: *** Hardlinking files ***
Dec  2 19:19:45 np0005543037 dracut[1286]: *** Hardlinking files done ***
Dec  2 19:19:45 np0005543037 dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  2 19:19:46 np0005543037 kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Dec  2 19:19:46 np0005543037 kdumpctl[1016]: kdump: Starting kdump: [OK]
Dec  2 19:19:46 np0005543037 systemd[1]: Finished Crash recovery kernel arming.
Dec  2 19:19:46 np0005543037 systemd[1]: Startup finished in 3.745s (kernel) + 3.178s (initrd) + 19.458s (userspace) = 26.383s.
Dec  2 19:19:56 np0005543037 systemd[1]: Created slice User Slice of UID 1000.
Dec  2 19:19:56 np0005543037 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  2 19:19:56 np0005543037 systemd-logind[800]: New session 1 of user zuul.
Dec  2 19:19:56 np0005543037 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  2 19:19:56 np0005543037 systemd[1]: Starting User Manager for UID 1000...
Dec  2 19:19:56 np0005543037 systemd[4297]: Queued start job for default target Main User Target.
Dec  2 19:19:56 np0005543037 systemd[4297]: Created slice User Application Slice.
Dec  2 19:19:56 np0005543037 systemd[4297]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  2 19:19:56 np0005543037 systemd[4297]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 19:19:56 np0005543037 systemd[4297]: Reached target Paths.
Dec  2 19:19:56 np0005543037 systemd[4297]: Reached target Timers.
Dec  2 19:19:56 np0005543037 systemd[4297]: Starting D-Bus User Message Bus Socket...
Dec  2 19:19:56 np0005543037 systemd[4297]: Starting Create User's Volatile Files and Directories...
Dec  2 19:19:56 np0005543037 systemd[4297]: Listening on D-Bus User Message Bus Socket.
Dec  2 19:19:56 np0005543037 systemd[4297]: Reached target Sockets.
Dec  2 19:19:56 np0005543037 systemd[4297]: Finished Create User's Volatile Files and Directories.
Dec  2 19:19:56 np0005543037 systemd[4297]: Reached target Basic System.
Dec  2 19:19:56 np0005543037 systemd[4297]: Reached target Main User Target.
Dec  2 19:19:56 np0005543037 systemd[4297]: Startup finished in 157ms.
Dec  2 19:19:56 np0005543037 systemd[1]: Started User Manager for UID 1000.
Dec  2 19:19:56 np0005543037 systemd[1]: Started Session 1 of User zuul.
Dec  2 19:19:56 np0005543037 python3[4379]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:19:59 np0005543037 python3[4407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:20:00 np0005543037 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 19:20:05 np0005543037 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:20:06 np0005543037 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  2 19:20:08 np0005543037 python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG1k3FGoeVRMi/pfqrqG+qYYbC3MHtyLyiSp+H1do1NUQw/Cg+wJJM9AY6SS30BRejfYeXmkEkXZhV8wBMXnVt9ot3DIJsYyFguzPUpBwm+dalGcqkaCwbE8oDxsrdeCCsXql6RrzRVh7SgNQfv6SiXU0RiXzN+6k535cBJGIOQoZy5yrkFFSqOoYGS8YY+3lq8NaHwsOn29bQCSd9+kxEOPMuEcoDJqy1nkNQ7ZgiCFfDkBa6Q7ODBFFl+BxSnhWQ6lWCnxYeIW1Br443YlF9LZB5t0bvGwfcxVO6u0AlkKaRayBqCVaC+OI9Ctyyrxo1/qd9tPuSIqrj5/mDZs8bOpuJTs9Ns6Sj101LzBe5Nmix/JPI09Q/5jwbKNupQI20OGkcCxu/TW4GjFzKjTzgbAuxiqfYVI/CIqZLzy4CJnhRR8O2SvkIpGgQ+O38P7YTwfvxb8siGkcOiJFj5Tf5seM1Fb5b18+PRSByPJa1E2UImhmHtdVaFSXYRivijF0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:09 np0005543037 python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:09 np0005543037 python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:10 np0005543037 python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721209.4807749-207-279452062333833/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0b7e10745d7f46dea4defb7b84db63f6_id_rsa follow=False checksum=2cedf01d49111e70ff95fb7d4c891ab5a13b2d0e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:10 np0005543037 python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:11 np0005543037 python3[4922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721210.4663303-240-164617209161887/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0b7e10745d7f46dea4defb7b84db63f6_id_rsa.pub follow=False checksum=d7776b431cdb32c605b679e672f334972862f140 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:12 np0005543037 python3[4970]: ansible-ping Invoked with data=pong
Dec  2 19:20:13 np0005543037 python3[4994]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:20:15 np0005543037 python3[5052]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  2 19:20:16 np0005543037 python3[5084]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:17 np0005543037 python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:17 np0005543037 python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:17 np0005543037 python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:17 np0005543037 python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:18 np0005543037 python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:19 np0005543037 python3[5230]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:20 np0005543037 python3[5308]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:20 np0005543037 python3[5381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721219.9602287-21-88994690975283/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:21 np0005543037 python3[5429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:22 np0005543037 python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:22 np0005543037 python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:22 np0005543037 python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:22 np0005543037 python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:23 np0005543037 python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:23 np0005543037 python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:23 np0005543037 python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:24 np0005543037 python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:24 np0005543037 python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:24 np0005543037 python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:25 np0005543037 python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:25 np0005543037 python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:25 np0005543037 python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:25 np0005543037 python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:26 np0005543037 python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:26 np0005543037 python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:26 np0005543037 python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:27 np0005543037 python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:27 np0005543037 python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:27 np0005543037 python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:28 np0005543037 python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:28 np0005543037 python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:28 np0005543037 python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:29 np0005543037 python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:29 np0005543037 python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:20:31 np0005543037 python3[6055]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 19:20:31 np0005543037 systemd[1]: Starting Time & Date Service...
Dec  2 19:20:31 np0005543037 systemd[1]: Started Time & Date Service.
Dec  2 19:20:31 np0005543037 systemd-timedated[6057]: Changed time zone to 'UTC' (UTC).
Dec  2 19:20:31 np0005543037 python3[6086]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:32 np0005543037 python3[6162]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:32 np0005543037 python3[6233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764721232.08354-153-23865397437081/source _original_basename=tmpbk5lman3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:33 np0005543037 python3[6333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:33 np0005543037 python3[6404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764721233.0582342-183-17510444778160/source _original_basename=tmprgnollya follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:34 np0005543037 python3[6506]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:34 np0005543037 python3[6579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764721234.1862042-231-177289158155994/source _original_basename=tmph6qnordz follow=False checksum=46b5d7337244bac6339155768dd5768694f5a0e7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:35 np0005543037 python3[6627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:20:35 np0005543037 python3[6653]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:20:36 np0005543037 python3[6733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:20:36 np0005543037 python3[6806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721235.9555855-273-68918345348552/source _original_basename=tmpdbgcena9 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:37 np0005543037 python3[6857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-fdf3-80a0-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:20:38 np0005543037 python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-fdf3-80a0-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  2 19:20:39 np0005543037 python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:20:56 np0005543037 python3[6939]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:21:01 np0005543037 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  2 19:21:30 np0005543037 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  2 19:21:30 np0005543037 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.6964] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 19:21:30 np0005543037 systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7137] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7158] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7161] device (eth1): carrier: link connected
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7163] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7167] policy: auto-activating connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7170] device (eth1): Activation: starting connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7171] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7173] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7176] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:21:30 np0005543037 NetworkManager[860]: <info>  [1764721290.7179] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:21:31 np0005543037 python3[6969]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-e99c-3723-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:21:42 np0005543037 python3[7049]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:21:42 np0005543037 python3[7122]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764721301.6832855-102-198149725575844/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=38f7cf41e66e471640785a9c339c05f31751f1d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:21:43 np0005543037 python3[7172]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 19:21:43 np0005543037 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 19:21:43 np0005543037 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 19:21:43 np0005543037 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 19:21:43 np0005543037 systemd[1]: Stopping Network Manager...
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4740] caught SIGTERM, shutting down normally.
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): canceled DHCP transaction
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4749] dhcp4 (eth0): state changed no lease
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4751] manager: NetworkManager state is now CONNECTING
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4806] dhcp4 (eth1): canceled DHCP transaction
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4807] dhcp4 (eth1): state changed no lease
Dec  2 19:21:43 np0005543037 NetworkManager[860]: <info>  [1764721303.4876] exiting (success)
Dec  2 19:21:43 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:21:43 np0005543037 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 19:21:43 np0005543037 systemd[1]: Stopped Network Manager.
Dec  2 19:21:43 np0005543037 systemd[1]: NetworkManager.service: Consumed 1.123s CPU time, 9.9M memory peak.
Dec  2 19:21:43 np0005543037 systemd[1]: Starting Network Manager...
Dec  2 19:21:43 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.5585] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.5588] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.5666] manager[0x558380dc1070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 19:21:43 np0005543037 systemd[1]: Starting Hostname Service...
Dec  2 19:21:43 np0005543037 systemd[1]: Started Hostname Service.
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6844] hostname: hostname: using hostnamed
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6850] hostname: static hostname changed from (none) to "np0005543037.novalocal"
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6860] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6872] manager[0x558380dc1070]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6872] manager[0x558380dc1070]: rfkill: WWAN hardware radio set enabled
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6924] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6924] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6926] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6926] manager: Networking is enabled by state file
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6930] settings: Loaded settings plugin: keyfile (internal)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6938] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6979] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.6996] dhcp: init: Using DHCP client 'internal'
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7000] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7009] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7018] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7031] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7044] device (eth0): carrier: link connected
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7052] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7061] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7062] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7073] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7084] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7096] device (eth1): carrier: link connected
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7103] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7112] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab) (indicated)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7112] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7123] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7133] device (eth1): Activation: starting connection 'Wired connection 1' (1b189b81-0918-3f63-b174-3141827cccab)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7144] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 19:21:43 np0005543037 systemd[1]: Started Network Manager.
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7151] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7155] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7159] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7163] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7168] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7172] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7176] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7181] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7192] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7197] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7212] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7216] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7244] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7253] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7266] device (lo): Activation: successful, device activated.
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7281] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7295] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 19:21:43 np0005543037 systemd[1]: Starting Network Manager Wait Online...
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7408] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7440] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7443] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7450] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7457] device (eth0): Activation: successful, device activated.
Dec  2 19:21:43 np0005543037 NetworkManager[7177]: <info>  [1764721303.7467] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 19:21:44 np0005543037 python3[7256]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-e99c-3723-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:21:53 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:22:13 np0005543037 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3284] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 19:22:29 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:22:29 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3828] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3832] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3845] device (eth1): Activation: successful, device activated.
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3851] manager: startup complete
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3857] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <warn>  [1764721349.3866] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3883] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 systemd[1]: Finished Network Manager Wait Online.
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3976] dhcp4 (eth1): canceled DHCP transaction
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3977] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3977] dhcp4 (eth1): state changed no lease
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3993] policy: auto-activating connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3997] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.3998] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4001] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4009] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4020] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4066] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4068] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:22:29 np0005543037 NetworkManager[7177]: <info>  [1764721349.4077] device (eth1): Activation: successful, device activated.
Dec  2 19:22:36 np0005543037 systemd[4297]: Starting Mark boot as successful...
Dec  2 19:22:36 np0005543037 systemd[4297]: Finished Mark boot as successful.
Dec  2 19:22:39 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:22:43 np0005543037 python3[7362]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:22:43 np0005543037 python3[7435]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721363.111369-267-85282930200528/source _original_basename=tmp532lk67c follow=False checksum=0a83b367ded148c485f0eee55c6073e307d79589 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:23:44 np0005543037 systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Dec  2 19:25:36 np0005543037 systemd[4297]: Created slice User Background Tasks Slice.
Dec  2 19:25:36 np0005543037 systemd[4297]: Starting Cleanup of User's Temporary Files and Directories...
Dec  2 19:25:36 np0005543037 systemd[4297]: Finished Cleanup of User's Temporary Files and Directories.
Dec  2 19:30:21 np0005543037 systemd-logind[800]: New session 3 of user zuul.
Dec  2 19:30:21 np0005543037 systemd[1]: Started Session 3 of User zuul.
Dec  2 19:30:21 np0005543037 python3[7499]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-b657-a2e0-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:21 np0005543037 python3[7528]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:21 np0005543037 python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:22 np0005543037 python3[7580]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:22 np0005543037 python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:23 np0005543037 python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:23 np0005543037 python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:30:24 np0005543037 python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721823.413551-478-262932798804482/source _original_basename=tmpn6z68h0n follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:30:25 np0005543037 python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 19:30:25 np0005543037 systemd[1]: Reloading.
Dec  2 19:30:25 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:30:26 np0005543037 python3[7889]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  2 19:30:27 np0005543037 python3[7915]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:27 np0005543037 python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:27 np0005543037 python3[7971]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:27 np0005543037 python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:28 np0005543037 python3[8026]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-b657-a2e0-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:30:29 np0005543037 python3[8056]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  2 19:30:31 np0005543037 systemd[1]: session-3.scope: Deactivated successfully.
Dec  2 19:30:31 np0005543037 systemd[1]: session-3.scope: Consumed 4.497s CPU time.
Dec  2 19:30:31 np0005543037 systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Dec  2 19:30:31 np0005543037 systemd-logind[800]: Removed session 3.
Dec  2 19:30:32 np0005543037 systemd-logind[800]: New session 4 of user zuul.
Dec  2 19:30:32 np0005543037 systemd[1]: Started Session 4 of User zuul.
Dec  2 19:30:33 np0005543037 python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  2 19:30:45 np0005543037 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:30:45 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:30:54 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:31:02 np0005543037 kernel: SELinux:  Converting 385 SID table entries...
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:31:03 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:31:04 np0005543037 setsebool[8154]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  2 19:31:04 np0005543037 setsebool[8154]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  2 19:31:15 np0005543037 kernel: SELinux:  Converting 388 SID table entries...
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:31:15 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:31:33 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 19:31:33 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:31:33 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:31:33 np0005543037 systemd[1]: Reloading.
Dec  2 19:31:33 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:31:33 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:31:51 np0005543037 python3[17533]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4747-1035-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:31:52 np0005543037 kernel: evm: overlay not supported
Dec  2 19:31:52 np0005543037 systemd[4297]: Starting D-Bus User Message Bus...
Dec  2 19:31:52 np0005543037 dbus-broker-launch[17947]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  2 19:31:52 np0005543037 dbus-broker-launch[17947]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  2 19:31:52 np0005543037 systemd[4297]: Started D-Bus User Message Bus.
Dec  2 19:31:52 np0005543037 dbus-broker-lau[17947]: Ready
Dec  2 19:31:52 np0005543037 systemd[4297]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  2 19:31:52 np0005543037 systemd[4297]: Created slice Slice /user.
Dec  2 19:31:52 np0005543037 systemd[4297]: podman-17869.scope: unit configures an IP firewall, but not running as root.
Dec  2 19:31:52 np0005543037 systemd[4297]: (This warning is only shown for the first unit using IP firewalling.)
Dec  2 19:31:52 np0005543037 systemd[4297]: Started podman-17869.scope.
Dec  2 19:31:53 np0005543037 systemd[4297]: Started podman-pause-50584a2c.scope.
Dec  2 19:31:53 np0005543037 python3[18286]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.150:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.150:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:31:53 np0005543037 python3[18286]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  2 19:31:54 np0005543037 systemd[1]: session-4.scope: Deactivated successfully.
Dec  2 19:31:54 np0005543037 systemd[1]: session-4.scope: Consumed 59.881s CPU time.
Dec  2 19:31:54 np0005543037 systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Dec  2 19:31:54 np0005543037 systemd-logind[800]: Removed session 4.
Dec  2 19:32:00 np0005543037 irqbalance[792]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  2 19:32:00 np0005543037 irqbalance[792]: IRQ 26 affinity is now unmanaged
Dec  2 19:32:16 np0005543037 systemd-logind[800]: New session 5 of user zuul.
Dec  2 19:32:16 np0005543037 systemd[1]: Started Session 5 of User zuul.
Dec  2 19:32:17 np0005543037 python3[26256]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:32:17 np0005543037 python3[26440]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:32:18 np0005543037 python3[26811]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005543037.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  2 19:32:19 np0005543037 python3[27021]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnvzaEJLazr+2A79QTuOg+l8N6rmlNU2AOwt8CCoTWkRPJADFYMQvyUy0SivCzoispoNUuXX55+VlwUbR0W2Pk= zuul@np0005543036.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  2 19:32:19 np0005543037 python3[27289]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:32:20 np0005543037 python3[27537]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764721939.198401-135-122839661404084/source _original_basename=tmp9pgmqr2u follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:32:21 np0005543037 python3[27859]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  2 19:32:21 np0005543037 systemd[1]: Starting Hostname Service...
Dec  2 19:32:21 np0005543037 systemd[1]: Started Hostname Service.
Dec  2 19:32:21 np0005543037 systemd-hostnamed[27958]: Changed pretty hostname to 'compute-0'
Dec  2 19:32:21 np0005543037 systemd-hostnamed[27958]: Hostname set to <compute-0> (static)
Dec  2 19:32:21 np0005543037 NetworkManager[7177]: <info>  [1764721941.2338] hostname: static hostname changed from "np0005543037.novalocal" to "compute-0"
Dec  2 19:32:21 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:32:21 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:32:21 np0005543037 systemd[1]: session-5.scope: Deactivated successfully.
Dec  2 19:32:21 np0005543037 systemd[1]: session-5.scope: Consumed 2.716s CPU time.
Dec  2 19:32:21 np0005543037 systemd-logind[800]: Session 5 logged out. Waiting for processes to exit.
Dec  2 19:32:21 np0005543037 systemd-logind[800]: Removed session 5.
Dec  2 19:32:27 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:32:27 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:32:27 np0005543037 systemd[1]: man-db-cache-update.service: Consumed 1min 6.686s CPU time.
Dec  2 19:32:27 np0005543037 systemd[1]: run-r8faee6dcbb17464395551d111f73ade5.service: Deactivated successfully.
Dec  2 19:32:31 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:32:51 np0005543037 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 19:34:26 np0005543037 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  2 19:34:26 np0005543037 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  2 19:34:26 np0005543037 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  2 19:34:26 np0005543037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  2 19:37:48 np0005543037 systemd-logind[800]: New session 6 of user zuul.
Dec  2 19:37:48 np0005543037 systemd[1]: Started Session 6 of User zuul.
Dec  2 19:37:49 np0005543037 python3[29995]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:37:51 np0005543037 python3[30111]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:51 np0005543037 python3[30184]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:51 np0005543037 python3[30210]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:52 np0005543037 python3[30283]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:52 np0005543037 python3[30309]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:52 np0005543037 python3[30382]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:53 np0005543037 python3[30408]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:53 np0005543037 python3[30481]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:53 np0005543037 python3[30507]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:54 np0005543037 python3[30580]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:54 np0005543037 python3[30606]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:55 np0005543037 python3[30679]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:37:55 np0005543037 python3[30705]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  2 19:37:55 np0005543037 python3[30778]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764722270.7212627-33630-205516411691382/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:41:13 np0005543037 python3[30841]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:46:13 np0005543037 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Dec  2 19:46:13 np0005543037 systemd[1]: session-6.scope: Deactivated successfully.
Dec  2 19:46:13 np0005543037 systemd[1]: session-6.scope: Consumed 5.778s CPU time.
Dec  2 19:46:13 np0005543037 systemd-logind[800]: Removed session 6.
Dec  2 19:54:40 np0005543037 systemd-logind[800]: New session 7 of user zuul.
Dec  2 19:54:40 np0005543037 systemd[1]: Started Session 7 of User zuul.
Dec  2 19:54:41 np0005543037 python3.9[31006]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:54:44 np0005543037 python3.9[31188]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:54:51 np0005543037 systemd[1]: session-7.scope: Deactivated successfully.
Dec  2 19:54:51 np0005543037 systemd[1]: session-7.scope: Consumed 8.215s CPU time.
Dec  2 19:54:51 np0005543037 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Dec  2 19:54:51 np0005543037 systemd-logind[800]: Removed session 7.
Dec  2 19:55:07 np0005543037 systemd-logind[800]: New session 8 of user zuul.
Dec  2 19:55:07 np0005543037 systemd[1]: Started Session 8 of User zuul.
Dec  2 19:55:08 np0005543037 python3.9[31398]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  2 19:55:10 np0005543037 python3.9[31572]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:55:11 np0005543037 python3.9[31724]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:55:13 np0005543037 python3.9[31877]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:55:14 np0005543037 python3.9[32029]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:55:15 np0005543037 python3.9[32181]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:55:15 np0005543037 python3.9[32304]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723314.4881098-73-53510234384744/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:55:16 np0005543037 python3.9[32456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:55:17 np0005543037 python3.9[32612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:55:18 np0005543037 python3.9[32764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:55:19 np0005543037 python3.9[32914]: ansible-ansible.builtin.service_facts Invoked
Dec  2 19:55:25 np0005543037 python3.9[33167]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:55:26 np0005543037 python3.9[33317]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:55:27 np0005543037 python3.9[33471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:55:28 np0005543037 python3.9[33629]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 19:55:29 np0005543037 python3.9[33713]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:56:18 np0005543037 systemd[1]: Reloading.
Dec  2 19:56:18 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:56:18 np0005543037 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  2 19:56:18 np0005543037 systemd[1]: Reloading.
Dec  2 19:56:19 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:56:19 np0005543037 systemd[1]: Starting dnf makecache...
Dec  2 19:56:19 np0005543037 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  2 19:56:19 np0005543037 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  2 19:56:19 np0005543037 systemd[1]: Reloading.
Dec  2 19:56:19 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:56:19 np0005543037 dnf[33961]: Failed determining last makecache time.
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-barbican-42b4c41831408a8e323 160 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 192 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-cinder-1c00d6490d88e436f26ef 195 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-stevedore-c4acc5639fd2329372142 141 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-cloudkitty-tests-tempest-2c80f8 163 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 190 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-designate-tests-tempest-347fdbc 189 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-glance-1fd12c29b339f30fe823e 187 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 147 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-manila-3c01b7181572c95dac462 148 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-whitebox-neutron-tests-tempest- 194 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-octavia-ba397f07a7331190208c 176 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-watcher-c014f81a8647287f6dcc 180 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-ansible-config_template-5ccaa22121a7ff 195 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 186 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-swift-dc98a8463506ac520c469a 190 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-python-tempestconf-8515371b7cceebd4282 179 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 19:56:19 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 19:56:19 np0005543037 dnf[33961]: delorean-openstack-heat-ui-013accbfd179753bc3f0 155 kB/s | 3.0 kB     00:00
Dec  2 19:56:19 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 19:56:19 np0005543037 dnf[33961]: CentOS Stream 9 - BaseOS                         59 kB/s | 5.9 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: CentOS Stream 9 - AppStream                      59 kB/s | 6.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: CentOS Stream 9 - CRB                            63 kB/s | 5.8 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: CentOS Stream 9 - Extras packages                88 kB/s | 8.3 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: dlrn-antelope-testing                           113 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: dlrn-antelope-build-deps                        112 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: centos9-rabbitmq                                 58 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: centos9-storage                                  89 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: centos9-opstools                                 91 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: NFV SIG OpenvSwitch                              58 kB/s | 3.0 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: repo-setup-centos-appstream                     121 kB/s | 4.4 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: repo-setup-centos-baseos                        158 kB/s | 3.9 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: repo-setup-centos-highavailability              175 kB/s | 3.9 kB     00:00
Dec  2 19:56:20 np0005543037 dnf[33961]: repo-setup-centos-powertools                    185 kB/s | 4.3 kB     00:00
Dec  2 19:56:21 np0005543037 dnf[33961]: Extra Packages for Enterprise Linux 9 - x86_64  107 kB/s |  33 kB     00:00
Dec  2 19:56:21 np0005543037 dnf[33961]: Metadata cache created.
Dec  2 19:56:21 np0005543037 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  2 19:56:21 np0005543037 systemd[1]: Finished dnf makecache.
Dec  2 19:56:21 np0005543037 systemd[1]: dnf-makecache.service: Consumed 1.859s CPU time.
Dec  2 19:57:24 np0005543037 kernel: SELinux:  Converting 2718 SID table entries...
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:57:24 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:57:24 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  2 19:57:24 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:57:24 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:57:24 np0005543037 systemd[1]: Reloading.
Dec  2 19:57:24 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:57:25 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:57:26 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:57:26 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:57:26 np0005543037 systemd[1]: man-db-cache-update.service: Consumed 1.595s CPU time.
Dec  2 19:57:26 np0005543037 systemd[1]: run-r08757739ca544cf7a70eef4c3fbf367d.service: Deactivated successfully.
Dec  2 19:57:26 np0005543037 python3.9[35267]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:57:28 np0005543037 python3.9[35549]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  2 19:57:29 np0005543037 python3.9[35701]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  2 19:57:31 np0005543037 python3.9[35854]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:57:32 np0005543037 python3.9[36006]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  2 19:57:34 np0005543037 python3.9[36158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:57:34 np0005543037 python3.9[36310]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:57:35 np0005543037 python3.9[36433]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723454.2693102-236-69798250502799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:57:38 np0005543037 python3.9[36585]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:57:40 np0005543037 python3.9[36737]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:57:41 np0005543037 python3.9[36891]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:57:42 np0005543037 python3.9[37043]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  2 19:57:42 np0005543037 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 19:57:42 np0005543037 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 19:57:43 np0005543037 python3.9[37197]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 19:57:44 np0005543037 python3.9[37355]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 19:57:45 np0005543037 python3.9[37515]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  2 19:57:46 np0005543037 python3.9[37668]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 19:57:47 np0005543037 python3.9[37826]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  2 19:57:48 np0005543037 python3.9[37978]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:57:51 np0005543037 python3.9[38131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:57:52 np0005543037 python3.9[38283]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:57:52 np0005543037 python3.9[38406]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723471.5245261-355-177500035114152/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:57:54 np0005543037 python3.9[38558]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 19:57:54 np0005543037 systemd[1]: Starting Load Kernel Modules...
Dec  2 19:57:54 np0005543037 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  2 19:57:54 np0005543037 kernel: Bridge firewalling registered
Dec  2 19:57:54 np0005543037 systemd-modules-load[38562]: Inserted module 'br_netfilter'
Dec  2 19:57:54 np0005543037 systemd[1]: Finished Load Kernel Modules.
Dec  2 19:57:55 np0005543037 python3.9[38717]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:57:55 np0005543037 python3.9[38840]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723474.5247805-378-245584183881638/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 19:57:56 np0005543037 python3.9[38992]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:57:59 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 19:57:59 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 19:58:00 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:58:00 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:58:00 np0005543037 systemd[1]: Reloading.
Dec  2 19:58:00 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:58:00 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:58:02 np0005543037 python3.9[40334]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:58:03 np0005543037 python3.9[41176]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  2 19:58:03 np0005543037 python3.9[41859]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:58:04 np0005543037 python3.9[42763]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:58:04 np0005543037 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 19:58:05 np0005543037 systemd[1]: Starting Authorization Manager...
Dec  2 19:58:05 np0005543037 polkitd[43396]: Started polkitd version 0.117
Dec  2 19:58:05 np0005543037 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 19:58:05 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:58:05 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:58:05 np0005543037 systemd[1]: man-db-cache-update.service: Consumed 6.848s CPU time.
Dec  2 19:58:05 np0005543037 systemd[1]: run-r362cddee764847aba208b94b00958fca.service: Deactivated successfully.
Dec  2 19:58:05 np0005543037 systemd[1]: Started Authorization Manager.
Dec  2 19:58:06 np0005543037 python3.9[43567]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 19:58:06 np0005543037 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  2 19:58:06 np0005543037 systemd[1]: tuned.service: Deactivated successfully.
Dec  2 19:58:06 np0005543037 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  2 19:58:06 np0005543037 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  2 19:58:07 np0005543037 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  2 19:58:08 np0005543037 python3.9[43729]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  2 19:58:10 np0005543037 python3.9[43881]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 19:58:10 np0005543037 systemd[1]: Reloading.
Dec  2 19:58:11 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:58:12 np0005543037 python3.9[44071]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 19:58:13 np0005543037 systemd[1]: Reloading.
Dec  2 19:58:13 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:58:14 np0005543037 python3.9[44260]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:58:15 np0005543037 python3.9[44413]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:58:15 np0005543037 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  2 19:58:16 np0005543037 python3.9[44566]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:58:18 np0005543037 python3.9[44728]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:58:19 np0005543037 python3.9[44881]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 19:58:19 np0005543037 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  2 19:58:19 np0005543037 systemd[1]: Stopped Apply Kernel Variables.
Dec  2 19:58:19 np0005543037 systemd[1]: Stopping Apply Kernel Variables...
Dec  2 19:58:19 np0005543037 systemd[1]: Starting Apply Kernel Variables...
Dec  2 19:58:19 np0005543037 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  2 19:58:19 np0005543037 systemd[1]: Finished Apply Kernel Variables.
Dec  2 19:58:20 np0005543037 systemd[1]: session-8.scope: Deactivated successfully.
Dec  2 19:58:20 np0005543037 systemd[1]: session-8.scope: Consumed 2min 23.325s CPU time.
Dec  2 19:58:20 np0005543037 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Dec  2 19:58:20 np0005543037 systemd-logind[800]: Removed session 8.
Dec  2 19:58:27 np0005543037 systemd-logind[800]: New session 9 of user zuul.
Dec  2 19:58:27 np0005543037 systemd[1]: Started Session 9 of User zuul.
Dec  2 19:58:28 np0005543037 python3.9[45064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:58:29 np0005543037 python3.9[45220]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  2 19:58:31 np0005543037 python3.9[45373]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 19:58:32 np0005543037 python3.9[45531]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 19:58:33 np0005543037 python3.9[45691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 19:58:34 np0005543037 python3.9[45775]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 19:58:38 np0005543037 python3.9[45941]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:58:40 np0005543037 irqbalance[792]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  2 19:58:40 np0005543037 irqbalance[792]: IRQ 27 affinity is now unmanaged
Dec  2 19:58:49 np0005543037 kernel: SELinux:  Converting 2730 SID table entries...
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:58:49 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:58:49 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  2 19:58:49 np0005543037 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  2 19:58:51 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:58:51 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:58:51 np0005543037 systemd[1]: Reloading.
Dec  2 19:58:51 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:58:51 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 19:58:51 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:58:52 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:58:52 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:58:52 np0005543037 systemd[1]: man-db-cache-update.service: Consumed 1.040s CPU time.
Dec  2 19:58:52 np0005543037 systemd[1]: run-re0044d9f0c424086b680e2045ee65649.service: Deactivated successfully.
Dec  2 19:58:53 np0005543037 python3.9[47038]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 19:58:53 np0005543037 systemd[1]: Reloading.
Dec  2 19:58:53 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:58:53 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 19:58:53 np0005543037 systemd[1]: Starting Open vSwitch Database Unit...
Dec  2 19:58:53 np0005543037 chown[47079]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  2 19:58:53 np0005543037 ovs-ctl[47084]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  2 19:58:53 np0005543037 ovs-ctl[47084]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  2 19:58:53 np0005543037 ovs-ctl[47084]: Starting ovsdb-server [  OK  ]
Dec  2 19:58:53 np0005543037 ovs-vsctl[47134]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  2 19:58:54 np0005543037 ovs-vsctl[47154]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"eda9fd7d-f2b1-4121-b9ac-fc31f8426272\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  2 19:58:54 np0005543037 ovs-ctl[47084]: Configuring Open vSwitch system IDs [  OK  ]
Dec  2 19:58:54 np0005543037 ovs-ctl[47084]: Enabling remote OVSDB managers [  OK  ]
Dec  2 19:58:54 np0005543037 systemd[1]: Started Open vSwitch Database Unit.
Dec  2 19:58:54 np0005543037 ovs-vsctl[47160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 19:58:54 np0005543037 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  2 19:58:54 np0005543037 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  2 19:58:54 np0005543037 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  2 19:58:54 np0005543037 kernel: openvswitch: Open vSwitch switching datapath
Dec  2 19:58:54 np0005543037 ovs-ctl[47204]: Inserting openvswitch module [  OK  ]
Dec  2 19:58:54 np0005543037 ovs-ctl[47173]: Starting ovs-vswitchd [  OK  ]
Dec  2 19:58:54 np0005543037 ovs-vsctl[47224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  2 19:58:54 np0005543037 ovs-ctl[47173]: Enabling remote OVSDB managers [  OK  ]
Dec  2 19:58:54 np0005543037 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  2 19:58:54 np0005543037 systemd[1]: Starting Open vSwitch...
Dec  2 19:58:54 np0005543037 systemd[1]: Finished Open vSwitch.
Dec  2 19:58:55 np0005543037 python3.9[47377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:58:56 np0005543037 python3.9[47529]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  2 19:58:57 np0005543037 kernel: SELinux:  Converting 2744 SID table entries...
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 19:58:57 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 19:58:59 np0005543037 python3.9[47684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:59:00 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  2 19:59:00 np0005543037 python3.9[47842]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:59:02 np0005543037 python3.9[47995]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:59:04 np0005543037 python3.9[48282]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  2 19:59:05 np0005543037 python3.9[48432]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:59:06 np0005543037 python3.9[48586]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:59:07 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:59:07 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:59:07 np0005543037 systemd[1]: Reloading.
Dec  2 19:59:08 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:59:08 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 19:59:08 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:59:08 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:59:08 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:59:08 np0005543037 systemd[1]: run-rfab00bddda594710a51eecc261c119d3.service: Deactivated successfully.
Dec  2 19:59:09 np0005543037 python3.9[48903]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 19:59:10 np0005543037 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  2 19:59:10 np0005543037 systemd[1]: Stopped Network Manager Wait Online.
Dec  2 19:59:10 np0005543037 systemd[1]: Stopping Network Manager Wait Online...
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5832] caught SIGTERM, shutting down normally.
Dec  2 19:59:10 np0005543037 systemd[1]: Stopping Network Manager...
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5853] dhcp4 (eth0): canceled DHCP transaction
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5854] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5854] dhcp4 (eth0): state changed no lease
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5858] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 19:59:10 np0005543037 NetworkManager[7177]: <info>  [1764723550.5950] exiting (success)
Dec  2 19:59:10 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:59:10 np0005543037 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  2 19:59:10 np0005543037 systemd[1]: Stopped Network Manager.
Dec  2 19:59:10 np0005543037 systemd[1]: NetworkManager.service: Consumed 16.900s CPU time, 4.1M memory peak, read 0B from disk, written 18.0K to disk.
Dec  2 19:59:10 np0005543037 systemd[1]: Starting Network Manager...
Dec  2 19:59:10 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.6872] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:ea2ffd2b-9398-4d40-9798-3e760752a119)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.6876] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.6952] manager[0x557797ba6090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  2 19:59:10 np0005543037 systemd[1]: Starting Hostname Service...
Dec  2 19:59:10 np0005543037 systemd[1]: Started Hostname Service.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8097] hostname: hostname: using hostnamed
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8098] hostname: static hostname changed from (none) to "compute-0"
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8107] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8116] manager[0x557797ba6090]: rfkill: Wi-Fi hardware radio set enabled
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8116] manager[0x557797ba6090]: rfkill: WWAN hardware radio set enabled
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8155] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8171] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8172] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8173] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8174] manager: Networking is enabled by state file
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8178] settings: Loaded settings plugin: keyfile (internal)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8185] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8232] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8248] dhcp: init: Using DHCP client 'internal'
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8252] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8261] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8275] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8290] device (lo): Activation: starting connection 'lo' (3c357ba2-4585-405b-8323-b1feb378cf6e)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8301] device (eth0): carrier: link connected
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8310] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8321] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8322] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8336] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8351] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8364] device (eth1): carrier: link connected
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8374] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8386] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40) (indicated)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8388] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8400] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8412] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec  2 19:59:10 np0005543037 systemd[1]: Started Network Manager.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8424] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8441] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8454] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8458] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8461] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8467] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8471] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8477] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8483] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8494] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8499] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8516] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8540] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8563] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec  2 19:59:10 np0005543037 systemd[1]: Starting Network Manager Wait Online...
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8568] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8578] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8668] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8682] device (lo): Activation: successful, device activated.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8693] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8704] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8709] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8716] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8722] device (eth1): Activation: successful, device activated.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8738] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8741] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8748] manager: NetworkManager state is now CONNECTED_SITE
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8757] device (eth0): Activation: successful, device activated.
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8768] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  2 19:59:10 np0005543037 NetworkManager[48912]: <info>  [1764723550.8793] manager: startup complete
Dec  2 19:59:10 np0005543037 systemd[1]: Finished Network Manager Wait Online.
Dec  2 19:59:11 np0005543037 python3.9[49129]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 19:59:16 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 19:59:16 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 19:59:16 np0005543037 systemd[1]: Reloading.
Dec  2 19:59:16 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 19:59:16 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 19:59:16 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 19:59:17 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 19:59:17 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 19:59:17 np0005543037 systemd[1]: run-reacd68e3634542beb1a0455801bdc15f.service: Deactivated successfully.
Dec  2 19:59:18 np0005543037 python3.9[49588]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 19:59:19 np0005543037 python3.9[49740]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:20 np0005543037 python3.9[49894]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:21 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:59:21 np0005543037 python3.9[50046]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:22 np0005543037 python3.9[50198]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:23 np0005543037 python3.9[50350]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:24 np0005543037 python3.9[50502]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:59:25 np0005543037 python3.9[50625]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723563.5629869-229-61643240079190/.source _original_basename=.dq1soo3g follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:25 np0005543037 python3.9[50777]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:26 np0005543037 python3.9[50929]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  2 19:59:27 np0005543037 python3.9[51081]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:30 np0005543037 python3.9[51508]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  2 19:59:31 np0005543037 ansible-async_wrapper.py[51683]: Invoked with j806931486018 300 /home/zuul/.ansible/tmp/ansible-tmp-1764723570.8562083-295-225688965816612/AnsiballZ_edpm_os_net_config.py _
Dec  2 19:59:31 np0005543037 ansible-async_wrapper.py[51686]: Starting module and watcher
Dec  2 19:59:31 np0005543037 ansible-async_wrapper.py[51686]: Start watching 51687 (300)
Dec  2 19:59:31 np0005543037 ansible-async_wrapper.py[51687]: Start module (51687)
Dec  2 19:59:31 np0005543037 ansible-async_wrapper.py[51683]: Return async_wrapper task started.
Dec  2 19:59:32 np0005543037 python3.9[51688]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  2 19:59:32 np0005543037 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  2 19:59:32 np0005543037 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  2 19:59:32 np0005543037 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  2 19:59:32 np0005543037 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  2 19:59:32 np0005543037 kernel: cfg80211: failed to load regulatory.db
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6026] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6051] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6819] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6821] audit: op="connection-add" uuid="e1d2481b-6164-4a08-897f-9ee2e4788170" name="br-ex-br" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6845] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6847] audit: op="connection-add" uuid="9d1b2739-0b12-4874-b8c5-49405c85884d" name="br-ex-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6866] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6868] audit: op="connection-add" uuid="cb86081b-6ab8-4587-87ac-9aa91b7cefbf" name="eth1-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6888] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6890] audit: op="connection-add" uuid="e1932086-3560-4a59-8e0a-e1714d4606c9" name="vlan20-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6909] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6911] audit: op="connection-add" uuid="63452749-9f5d-479b-aa72-a599c547a27c" name="vlan21-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6929] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6931] audit: op="connection-add" uuid="18edac4f-6825-49d7-b415-29a1c561e6a3" name="vlan22-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6950] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6951] audit: op="connection-add" uuid="a304a50a-aad1-454d-afc8-26431c8e94ec" name="vlan23-port" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.6988] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7016] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7018] audit: op="connection-add" uuid="02fb9cd9-2d1d-4f7b-82d3-7567314e8c5f" name="br-ex-if" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7082] audit: op="connection-update" uuid="771916e5-3ce0-5ffe-bc07-7ed0f995ac40" name="ci-private-network" args="ipv6.routes,ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.dns,ovs-interface.type,connection.master,connection.port-type,connection.timestamp,connection.slave-type,connection.controller,ipv4.routes,ipv4.never-default,ipv4.addresses,ipv4.method,ipv4.dns,ipv4.routing-rules,ovs-external-ids.data" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7111] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7113] audit: op="connection-add" uuid="70048df0-7e06-42d8-83d3-244e265607b4" name="vlan20-if" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7139] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7141] audit: op="connection-add" uuid="aa05774f-fca6-4e42-a505-e286a47b7a5d" name="vlan21-if" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7168] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7170] audit: op="connection-add" uuid="42661d6a-7c05-4c96-81e2-72659e39c865" name="vlan22-if" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7195] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7198] audit: op="connection-add" uuid="7f6d0054-9067-437e-9d5d-98deb5c19148" name="vlan23-if" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7216] audit: op="connection-delete" uuid="1b189b81-0918-3f63-b174-3141827cccab" name="Wired connection 1" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7246] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7262] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7268] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e1d2481b-6164-4a08-897f-9ee2e4788170)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7269] audit: op="connection-activate" uuid="e1d2481b-6164-4a08-897f-9ee2e4788170" name="br-ex-br" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7271] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7282] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7287] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9d1b2739-0b12-4874-b8c5-49405c85884d)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7290] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7299] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7305] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (cb86081b-6ab8-4587-87ac-9aa91b7cefbf)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7307] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7318] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7324] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e1932086-3560-4a59-8e0a-e1714d4606c9)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7327] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7336] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7342] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (63452749-9f5d-479b-aa72-a599c547a27c)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7345] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7354] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7360] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (18edac4f-6825-49d7-b415-29a1c561e6a3)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7363] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7373] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7379] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (a304a50a-aad1-454d-afc8-26431c8e94ec)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7380] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7383] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7386] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7396] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7403] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7409] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (02fb9cd9-2d1d-4f7b-82d3-7567314e8c5f)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7410] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7413] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7414] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7416] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7417] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7427] device (eth1): disconnecting for new activation request.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7428] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7431] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7432] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7434] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7437] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7443] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7449] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (70048df0-7e06-42d8-83d3-244e265607b4)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7450] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7453] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7456] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7458] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7461] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7467] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7473] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (aa05774f-fca6-4e42-a505-e286a47b7a5d)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7474] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7478] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7480] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7482] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7485] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7491] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7496] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (42661d6a-7c05-4c96-81e2-72659e39c865)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7497] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7501] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7503] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7505] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7508] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7514] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7520] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7f6d0054-9067-437e-9d5d-98deb5c19148)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7521] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7525] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7527] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7528] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7530] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7549] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7551] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7555] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7557] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7567] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7572] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7575] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7579] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7582] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: ovs-system: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7602] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7609] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7613] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7616] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7622] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: Timeout policy base is empty
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7627] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7632] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7635] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7641] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 systemd-udevd[51695]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7646] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7654] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7657] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7662] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): canceled DHCP transaction
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7667] dhcp4 (eth0): state changed no lease
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7669] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7682] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7686] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51689 uid=0 result="fail" reason="Device is not activated"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7690] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7696] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7734] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7740] dhcp4 (eth0): state changed new lease, address=38.102.83.36
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7744] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7752] device (eth1): disconnecting for new activation request.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7753] audit: op="connection-activate" uuid="771916e5-3ce0-5ffe-bc07-7ed0f995ac40" name="ci-private-network" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7830] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec  2 19:59:34 np0005543037 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.7909] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: br-ex: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8064] device (eth1): Activation: starting connection 'ci-private-network' (771916e5-3ce0-5ffe-bc07-7ed0f995ac40)
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8070] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8081] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8085] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8091] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8096] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8106] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8107] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8109] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8110] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8112] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8113] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8124] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8132] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8137] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8141] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8145] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8149] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8154] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8158] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8162] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 kernel: vlan22: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8167] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8171] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 systemd-udevd[51694]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8175] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8181] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8187] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8193] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8212] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8224] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8264] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8269] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8271] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: vlan20: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8276] device (eth1): Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8281] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8288] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 kernel: vlan21: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8350] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8381] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 systemd-udevd[51797]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8456] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8459] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8464] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: vlan23: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8471] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8490] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8596] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8602] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8608] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8624] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8684] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8684] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8688] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8703] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8749] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8772] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8780] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  2 19:59:34 np0005543037 NetworkManager[48912]: <info>  [1764723574.8795] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  2 19:59:35 np0005543037 python3.9[52047]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=status _async_dir=/root/.ansible_async
Dec  2 19:59:36 np0005543037 NetworkManager[48912]: <info>  [1764723576.0168] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec  2 19:59:36 np0005543037 NetworkManager[48912]: <info>  [1764723576.3547] checkpoint[0x557797b7c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  2 19:59:36 np0005543037 NetworkManager[48912]: <info>  [1764723576.3550] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51689 uid=0 result="success"
Dec  2 19:59:36 np0005543037 NetworkManager[48912]: <info>  [1764723576.7544] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec  2 19:59:36 np0005543037 NetworkManager[48912]: <info>  [1764723576.7561] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec  2 19:59:36 np0005543037 ansible-async_wrapper.py[51686]: 51687 still running (300)
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.0261] audit: op="networking-control" arg="global-dns-configuration" pid=51689 uid=0 result="success"
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.0297] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.0330] audit: op="networking-control" arg="global-dns-configuration" pid=51689 uid=0 result="success"
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.0363] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.2868] checkpoint[0x557797b7ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  2 19:59:37 np0005543037 NetworkManager[48912]: <info>  [1764723577.2874] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51689 uid=0 result="success"
Dec  2 19:59:37 np0005543037 ansible-async_wrapper.py[51687]: Module complete (51687)
Dec  2 19:59:39 np0005543037 python3.9[52154]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=status _async_dir=/root/.ansible_async
Dec  2 19:59:40 np0005543037 python3.9[52253]: ansible-ansible.legacy.async_status Invoked with jid=j806931486018.51683 mode=cleanup _async_dir=/root/.ansible_async
Dec  2 19:59:40 np0005543037 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  2 19:59:41 np0005543037 python3.9[52408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:59:41 np0005543037 python3.9[52531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723580.5806954-322-199017558531633/.source.returncode _original_basename=.0fqgky8x follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:41 np0005543037 ansible-async_wrapper.py[51686]: Done in kid B.
Dec  2 19:59:42 np0005543037 python3.9[52684]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 19:59:43 np0005543037 python3.9[52807]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723582.1366723-338-229455501789424/.source.cfg _original_basename=.9a0tc_0x follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 19:59:44 np0005543037 python3.9[52959]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 19:59:44 np0005543037 systemd[1]: Reloading Network Manager...
Dec  2 19:59:44 np0005543037 NetworkManager[48912]: <info>  [1764723584.5448] audit: op="reload" arg="0" pid=52963 uid=0 result="success"
Dec  2 19:59:44 np0005543037 NetworkManager[48912]: <info>  [1764723584.5456] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  2 19:59:44 np0005543037 systemd[1]: Reloaded Network Manager.
Dec  2 19:59:45 np0005543037 systemd[1]: session-9.scope: Deactivated successfully.
Dec  2 19:59:45 np0005543037 systemd[1]: session-9.scope: Consumed 56.490s CPU time.
Dec  2 19:59:45 np0005543037 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Dec  2 19:59:45 np0005543037 systemd-logind[800]: Removed session 9.
Dec  2 19:59:50 np0005543037 systemd-logind[800]: New session 10 of user zuul.
Dec  2 19:59:50 np0005543037 systemd[1]: Started Session 10 of User zuul.
Dec  2 19:59:51 np0005543037 python3.9[53147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 19:59:53 np0005543037 python3.9[53302]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 19:59:54 np0005543037 python3.9[53495]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 19:59:54 np0005543037 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  2 19:59:55 np0005543037 systemd[1]: session-10.scope: Deactivated successfully.
Dec  2 19:59:55 np0005543037 systemd[1]: session-10.scope: Consumed 2.860s CPU time.
Dec  2 19:59:55 np0005543037 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Dec  2 19:59:55 np0005543037 systemd-logind[800]: Removed session 10.
Dec  2 20:00:00 np0005543037 systemd-logind[800]: New session 11 of user zuul.
Dec  2 20:00:00 np0005543037 systemd[1]: Started Session 11 of User zuul.
Dec  2 20:00:02 np0005543037 python3.9[53679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:00:03 np0005543037 python3.9[53834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:00:04 np0005543037 python3.9[53990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:00:05 np0005543037 python3.9[54075]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:00:07 np0005543037 python3.9[54228]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:00:09 np0005543037 python3.9[54423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:10 np0005543037 python3.9[54575]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:00:10 np0005543037 systemd[1]: var-lib-containers-storage-overlay-compat3800328645-merged.mount: Deactivated successfully.
Dec  2 20:00:10 np0005543037 podman[54576]: 2025-12-03 01:00:10.427959689 +0000 UTC m=+0.072113342 system refresh
Dec  2 20:00:11 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:00:11 np0005543037 python3.9[54739]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:00:12 np0005543037 python3.9[54862]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723610.7084196-79-218479120912643/.source.json follow=False _original_basename=podman_network_config.j2 checksum=bebc1be99e667a6cdefc816a6f456d6e46ef811e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:13 np0005543037 python3.9[55014]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:00:13 np0005543037 python3.9[55137]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723612.548454-94-204791102889799/.source.conf follow=False _original_basename=registries.conf.j2 checksum=88b6a52c62914061ba0322e1e0763af09791b362 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:00:14 np0005543037 python3.9[55289]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:00:15 np0005543037 python3.9[55441]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:00:16 np0005543037 python3.9[55593]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:00:17 np0005543037 python3.9[55745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:00:18 np0005543037 python3.9[55897]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:00:20 np0005543037 python3.9[56050]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:00:21 np0005543037 python3.9[56206]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:00:22 np0005543037 python3.9[56358]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:00:23 np0005543037 python3.9[56510]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:00:24 np0005543037 python3.9[56663]: ansible-service_facts Invoked
Dec  2 20:00:24 np0005543037 network[56680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 20:00:24 np0005543037 network[56681]: 'network-scripts' will be removed from distribution in near future.
Dec  2 20:00:24 np0005543037 network[56682]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 20:00:31 np0005543037 python3.9[57134]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:00:33 np0005543037 python3.9[57287]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  2 20:00:35 np0005543037 python3.9[57439]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:00:35 np0005543037 python3.9[57564]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723634.4930625-238-187471491673923/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:36 np0005543037 python3.9[57718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:00:37 np0005543037 python3.9[57843]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723636.1788118-253-276213849071225/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:38 np0005543037 python3.9[57997]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:40 np0005543037 python3.9[58151]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:00:41 np0005543037 python3.9[58235]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:00:42 np0005543037 python3.9[58389]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:00:43 np0005543037 python3.9[58473]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:00:43 np0005543037 chronyd[799]: chronyd exiting
Dec  2 20:00:43 np0005543037 systemd[1]: Stopping NTP client/server...
Dec  2 20:00:43 np0005543037 systemd[1]: chronyd.service: Deactivated successfully.
Dec  2 20:00:43 np0005543037 systemd[1]: Stopped NTP client/server.
Dec  2 20:00:43 np0005543037 systemd[1]: Starting NTP client/server...
Dec  2 20:00:43 np0005543037 chronyd[58481]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  2 20:00:43 np0005543037 chronyd[58481]: Frequency -27.935 +/- 0.470 ppm read from /var/lib/chrony/drift
Dec  2 20:00:43 np0005543037 chronyd[58481]: Loaded seccomp filter (level 2)
Dec  2 20:00:43 np0005543037 systemd[1]: Started NTP client/server.
Dec  2 20:00:44 np0005543037 systemd[1]: session-11.scope: Deactivated successfully.
Dec  2 20:00:44 np0005543037 systemd[1]: session-11.scope: Consumed 31.032s CPU time.
Dec  2 20:00:44 np0005543037 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Dec  2 20:00:44 np0005543037 systemd-logind[800]: Removed session 11.
Dec  2 20:00:50 np0005543037 systemd-logind[800]: New session 12 of user zuul.
Dec  2 20:00:50 np0005543037 systemd[1]: Started Session 12 of User zuul.
Dec  2 20:00:51 np0005543037 python3.9[58662]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:52 np0005543037 python3.9[58814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:00:53 np0005543037 python3.9[58937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723652.1736214-34-273263560277370/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:00:54 np0005543037 systemd[1]: session-12.scope: Deactivated successfully.
Dec  2 20:00:54 np0005543037 systemd[1]: session-12.scope: Consumed 2.061s CPU time.
Dec  2 20:00:54 np0005543037 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Dec  2 20:00:54 np0005543037 systemd-logind[800]: Removed session 12.
Dec  2 20:00:59 np0005543037 systemd-logind[800]: New session 13 of user zuul.
Dec  2 20:00:59 np0005543037 systemd[1]: Started Session 13 of User zuul.
Dec  2 20:01:01 np0005543037 python3.9[59115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:01:02 np0005543037 python3.9[59286]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:03 np0005543037 python3.9[59461]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:04 np0005543037 python3.9[59584]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764723662.5702708-41-55121153701005/.source.json _original_basename=.hpn5ugi2 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:05 np0005543037 python3.9[59736]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:06 np0005543037 python3.9[59859]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723665.0302866-64-170354918919978/.source _original_basename=.osbygo2v follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:07 np0005543037 python3.9[60011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:01:08 np0005543037 python3.9[60163]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:08 np0005543037 python3.9[60286]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723667.4517655-88-59904245999159/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:01:09 np0005543037 python3.9[60438]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:10 np0005543037 python3.9[60561]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723669.1934643-88-38801202557653/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:01:11 np0005543037 python3.9[60713]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:12 np0005543037 python3.9[60865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:13 np0005543037 python3.9[60988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723671.487979-125-257188047968297/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:13 np0005543037 python3.9[61140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:14 np0005543037 python3.9[61263]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723673.2694209-140-1634923374622/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:15 np0005543037 python3.9[61415]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:01:15 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:16 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:16 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:16 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:16 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:16 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:16 np0005543037 systemd[1]: Starting EDPM Container Shutdown...
Dec  2 20:01:16 np0005543037 systemd[1]: Finished EDPM Container Shutdown.
Dec  2 20:01:17 np0005543037 python3.9[61644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:17 np0005543037 python3.9[61767]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723676.7405562-163-202003708782574/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:18 np0005543037 python3.9[61919]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:19 np0005543037 python3.9[62042]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723678.2125723-178-20500503554652/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:20 np0005543037 python3.9[62194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:01:20 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:20 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:20 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:21 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:21 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:21 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:21 np0005543037 systemd[1]: Starting Create netns directory...
Dec  2 20:01:21 np0005543037 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 20:01:21 np0005543037 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 20:01:21 np0005543037 systemd[1]: Finished Create netns directory.
Dec  2 20:01:22 np0005543037 python3.9[62422]: ansible-ansible.builtin.service_facts Invoked
Dec  2 20:01:22 np0005543037 network[62439]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 20:01:22 np0005543037 network[62440]: 'network-scripts' will be removed from distribution in near future.
Dec  2 20:01:22 np0005543037 network[62441]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 20:01:28 np0005543037 python3.9[62703]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:01:28 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:28 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:28 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:28 np0005543037 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  2 20:01:28 np0005543037 iptables.init[62743]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  2 20:01:28 np0005543037 iptables.init[62743]: iptables: Flushing firewall rules: [  OK  ]
Dec  2 20:01:28 np0005543037 systemd[1]: iptables.service: Deactivated successfully.
Dec  2 20:01:28 np0005543037 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  2 20:01:29 np0005543037 python3.9[62939]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:01:31 np0005543037 python3.9[63093]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:01:31 np0005543037 systemd[1]: Reloading.
Dec  2 20:01:31 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:01:31 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:01:32 np0005543037 systemd[1]: Starting Netfilter Tables...
Dec  2 20:01:32 np0005543037 systemd[1]: Finished Netfilter Tables.
Dec  2 20:01:33 np0005543037 python3.9[63285]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:01:34 np0005543037 python3.9[63438]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:35 np0005543037 python3.9[63563]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723693.9896095-247-196790145501321/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:36 np0005543037 python3.9[63716]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:01:36 np0005543037 systemd[1]: Reloading OpenSSH server daemon...
Dec  2 20:01:36 np0005543037 systemd[1]: Reloaded OpenSSH server daemon.
Dec  2 20:01:37 np0005543037 python3.9[63872]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:38 np0005543037 python3.9[64024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:38 np0005543037 python3.9[64147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723697.6273012-278-104958785570483/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:40 np0005543037 python3.9[64299]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  2 20:01:40 np0005543037 systemd[1]: Starting Time & Date Service...
Dec  2 20:01:40 np0005543037 systemd[1]: Started Time & Date Service.
Dec  2 20:01:41 np0005543037 python3.9[64455]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:42 np0005543037 python3.9[64607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:42 np0005543037 python3.9[64730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723701.5408926-313-207055980954492/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:43 np0005543037 python3.9[64882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:44 np0005543037 python3.9[65005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723703.0046923-328-197381561334933/.source.yaml _original_basename=.mycgpd4_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:45 np0005543037 python3.9[65157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:46 np0005543037 python3.9[65280]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723704.5087607-343-136635693966243/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:46 np0005543037 python3.9[65432]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:01:47 np0005543037 python3.9[65585]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:01:48 np0005543037 python3[65738]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 20:01:49 np0005543037 python3.9[65890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:50 np0005543037 python3.9[66013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723709.168731-382-196107179830500/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:51 np0005543037 python3.9[66165]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:52 np0005543037 python3.9[66288]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723710.73767-397-263405754327380/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:53 np0005543037 python3.9[66440]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:53 np0005543037 python3.9[66563]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723712.5029597-412-92431733530844/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:54 np0005543037 python3.9[66715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:55 np0005543037 python3.9[66838]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723714.0065947-427-170385453476110/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:56 np0005543037 python3.9[66990]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:01:57 np0005543037 python3.9[67113]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723715.5116231-442-230110429081055/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:57 np0005543037 python3.9[67265]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:01:58 np0005543037 python3.9[67417]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:02:00 np0005543037 python3.9[67576]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:01 np0005543037 python3.9[67729]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:01 np0005543037 python3.9[67881]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:02 np0005543037 python3.9[68033]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 20:02:03 np0005543037 python3.9[68186]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  2 20:02:04 np0005543037 systemd[1]: session-13.scope: Deactivated successfully.
Dec  2 20:02:04 np0005543037 systemd[1]: session-13.scope: Consumed 45.088s CPU time.
Dec  2 20:02:04 np0005543037 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Dec  2 20:02:04 np0005543037 systemd-logind[800]: Removed session 13.
Dec  2 20:02:09 np0005543037 systemd-logind[800]: New session 14 of user zuul.
Dec  2 20:02:09 np0005543037 systemd[1]: Started Session 14 of User zuul.
Dec  2 20:02:10 np0005543037 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  2 20:02:10 np0005543037 python3.9[68369]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  2 20:02:11 np0005543037 python3.9[68521]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:02:12 np0005543037 python3.9[68673]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:02:14 np0005543037 python3.9[68825]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUXzfc0dZJxCJJ4PEHADvL0LyRTIDw765KVVRPjKe66bZHCDrMnH3lZh13FtxojtEeAMtDjWC+H3ZGbvKAjyg6wN6ZmxRsL7o57jFWbBEQCHr3VQojAmFhu1UrX7NiAqOVCHai4lYrpddO28T1lK3oP3KKbw3gMA9o0GCA5TlMf5uAu10Zmp6u/NuST5GBQqc8D2ID2cZ5OL+IJ5OedhsuV0SutU2S7A/ua95d57ddgc8ltJh/JzrnYCjHsD4NNKpp1HDuLXzKlMVFpbxi5ihzlepdP4BMWtBqKzvoCCD+KxwXBNVjKLo57B/h+kfTNX/PI8IkDAGLOxYZyPozHtsLiKtTLao7Q1nU67ZcSZbDPBluTaBcUuiS12fEsU2SjMVNRPDFBKj8pn5cXmIZJaLccIvvWYr4u9xIEA1aX0IjZS9FEHD+eVLVe3HkQ+rFJ2WgMARupAMDmyso43Cje+xIL0vZYayq3PyCWhVln1wW80k/cY/5JCqhzF2lelqLBlU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICuIgcpw897dA3mGBxBK8DwsvfOOhRnRBasT73h7OlLn#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITA4C6TXl/AXsVGH1teKmoFi3piNxhosC0B5paSBiifwK5pyHq3w8pYOtVe+KhAjGKZJREVbl0k3rnMeNo31ps=#012 create=True mode=0644 path=/tmp/ansible.tbw3opq2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:15 np0005543037 python3.9[68977]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tbw3opq2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:02:16 np0005543037 python3.9[69131]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tbw3opq2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:16 np0005543037 systemd[1]: session-14.scope: Deactivated successfully.
Dec  2 20:02:16 np0005543037 systemd[1]: session-14.scope: Consumed 4.384s CPU time.
Dec  2 20:02:16 np0005543037 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Dec  2 20:02:16 np0005543037 systemd-logind[800]: Removed session 14.
Dec  2 20:02:22 np0005543037 systemd-logind[800]: New session 15 of user zuul.
Dec  2 20:02:22 np0005543037 systemd[1]: Started Session 15 of User zuul.
Dec  2 20:02:24 np0005543037 python3.9[69309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:02:25 np0005543037 python3.9[69465]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 20:02:26 np0005543037 python3.9[69619]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:02:27 np0005543037 python3.9[69772]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:02:28 np0005543037 python3.9[69925]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:02:29 np0005543037 python3.9[70079]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:02:30 np0005543037 python3.9[70234]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:31 np0005543037 systemd[1]: session-15.scope: Deactivated successfully.
Dec  2 20:02:31 np0005543037 systemd[1]: session-15.scope: Consumed 5.674s CPU time.
Dec  2 20:02:31 np0005543037 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Dec  2 20:02:31 np0005543037 systemd-logind[800]: Removed session 15.
Dec  2 20:02:37 np0005543037 systemd-logind[800]: New session 16 of user zuul.
Dec  2 20:02:37 np0005543037 systemd[1]: Started Session 16 of User zuul.
Dec  2 20:02:38 np0005543037 python3.9[70412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:02:39 np0005543037 python3.9[70568]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:02:40 np0005543037 python3.9[70652]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  2 20:02:42 np0005543037 python3.9[70803]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:02:43 np0005543037 python3.9[70954]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:02:44 np0005543037 python3.9[71104]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:02:44 np0005543037 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 20:02:45 np0005543037 python3.9[71255]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:02:46 np0005543037 systemd[1]: session-16.scope: Deactivated successfully.
Dec  2 20:02:46 np0005543037 systemd[1]: session-16.scope: Consumed 6.673s CPU time.
Dec  2 20:02:46 np0005543037 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Dec  2 20:02:46 np0005543037 systemd-logind[800]: Removed session 16.
Dec  2 20:02:52 np0005543037 systemd-logind[800]: New session 17 of user zuul.
Dec  2 20:02:52 np0005543037 systemd[1]: Started Session 17 of User zuul.
Dec  2 20:02:53 np0005543037 chronyd[58481]: Selected source 167.160.187.179 (pool.ntp.org)
Dec  2 20:02:53 np0005543037 python3.9[71434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:02:55 np0005543037 python3.9[71590]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:02:56 np0005543037 python3.9[71742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:02:57 np0005543037 python3.9[71894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:02:58 np0005543037 python3.9[72017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723776.8415914-65-114394127413801/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b60dcba84b3e4fb617a490c112070b73c949335a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:02:59 np0005543037 python3.9[72169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:00 np0005543037 python3.9[72292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723778.8327813-65-72766569506374/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0d1efd97dce1e1c7f057dca4a97cb1fb49ba3bf4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:00 np0005543037 python3.9[72444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:01 np0005543037 python3.9[72567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723780.4202187-65-21495615635438/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=92a51a9bbc603098437ab5af983ff5e779096e63 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:02 np0005543037 python3.9[72719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:03 np0005543037 python3.9[72871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:04 np0005543037 python3.9[73023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:05 np0005543037 python3.9[73146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723783.6405196-124-259528894097759/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d55fd552526a772bf5e3784699784cee65404ed5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:05 np0005543037 python3.9[73298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:06 np0005543037 python3.9[73421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723785.3038852-124-61680363695397/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0d1efd97dce1e1c7f057dca4a97cb1fb49ba3bf4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:07 np0005543037 python3.9[73573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:08 np0005543037 python3.9[73696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723786.7627301-124-156314046842763/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1b9df5c7eafafbcfe088505d80d8a06e3c7b4466 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:08 np0005543037 python3.9[73848]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:09 np0005543037 python3.9[74000]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:10 np0005543037 python3.9[74152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:11 np0005543037 python3.9[74275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723789.9022157-183-18529603661504/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e2647a2010a652e485acabe94eeb39508d65a0bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:11 np0005543037 python3.9[74427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:12 np0005543037 python3.9[74550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723791.3426628-183-214121697856966/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=775fef96b1ca8947276e166dfff5facf815492ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:13 np0005543037 python3.9[74702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:14 np0005543037 python3.9[74825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723792.9700813-183-135116643218560/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=85623a68291344524b32d6dec8b93c00901cb0e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:15 np0005543037 python3.9[74977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:15 np0005543037 python3.9[75129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:16 np0005543037 python3.9[75281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:17 np0005543037 python3.9[75404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723796.3131318-242-103449171502661/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=068de18b8da001226dc33069c5839a972e795c9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:18 np0005543037 python3.9[75556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:19 np0005543037 python3.9[75679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723797.8065267-242-34745148559102/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2a64f1b8009feb5d4193c68d35401643b8ae94ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:19 np0005543037 python3.9[75831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:20 np0005543037 python3.9[75954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723799.3041406-242-255786176711678/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=89de33ad168226810c0097243f44ecd47145b3c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:22 np0005543037 python3.9[76106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:22 np0005543037 python3.9[76258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:23 np0005543037 python3.9[76381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723802.3329875-310-51307819843582/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:24 np0005543037 python3.9[76533]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:25 np0005543037 python3.9[76685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:26 np0005543037 python3.9[76808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723805.029667-334-53845862355604/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:27 np0005543037 python3.9[76960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:28 np0005543037 python3.9[77112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:29 np0005543037 python3.9[77235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723807.8865862-358-121369756982184/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:30 np0005543037 python3.9[77387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:30 np0005543037 python3.9[77539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:31 np0005543037 python3.9[77662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723810.2527897-382-39508486280094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:32 np0005543037 python3.9[77814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:33 np0005543037 python3.9[77966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:34 np0005543037 python3.9[78089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723812.6598082-406-145118192353420/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:35 np0005543037 python3.9[78241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:35 np0005543037 python3.9[78393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:36 np0005543037 python3.9[78516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723815.2941115-430-119130246927620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:37 np0005543037 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Dec  2 20:03:37 np0005543037 systemd[1]: session-17.scope: Deactivated successfully.
Dec  2 20:03:37 np0005543037 systemd[1]: session-17.scope: Consumed 34.802s CPU time.
Dec  2 20:03:37 np0005543037 systemd-logind[800]: Removed session 17.
Dec  2 20:03:43 np0005543037 systemd-logind[800]: New session 18 of user zuul.
Dec  2 20:03:43 np0005543037 systemd[1]: Started Session 18 of User zuul.
Dec  2 20:03:44 np0005543037 python3.9[78694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:03:45 np0005543037 python3.9[78850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:46 np0005543037 python3.9[79002]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:03:47 np0005543037 python3.9[79152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:03:48 np0005543037 python3.9[79304]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  2 20:03:50 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  2 20:03:50 np0005543037 python3.9[79460]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:03:51 np0005543037 python3.9[79544]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:03:53 np0005543037 python3.9[79697]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:03:55 np0005543037 python3[79852]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  2 20:03:55 np0005543037 python3.9[80004]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:56 np0005543037 python3.9[80156]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:57 np0005543037 python3.9[80234]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:58 np0005543037 python3.9[80387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:03:59 np0005543037 python3.9[80465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jbptfvyw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:03:59 np0005543037 python3.9[80617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:00 np0005543037 python3.9[80695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:01 np0005543037 python3.9[80847]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:02 np0005543037 python3[81000]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 20:04:03 np0005543037 python3.9[81152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:04 np0005543037 python3.9[81277]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723842.676047-157-64282342555869/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:05 np0005543037 python3.9[81429]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:05 np0005543037 python3.9[81554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723844.3556914-172-123434415528750/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:06 np0005543037 python3.9[81706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:07 np0005543037 python3.9[81831]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723845.896886-187-161120702790852/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:08 np0005543037 python3.9[81983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:08 np0005543037 python3.9[82108]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723847.502152-202-261790757201180/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:09 np0005543037 python3.9[82260]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:10 np0005543037 python3.9[82385]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764723849.0628862-217-242799789866787/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:11 np0005543037 python3.9[82537]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:12 np0005543037 python3.9[82689]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:13 np0005543037 python3.9[82844]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:14 np0005543037 python3.9[82996]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:14 np0005543037 python3.9[83149]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:04:15 np0005543037 python3.9[83303]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:16 np0005543037 python3.9[83458]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:17 np0005543037 python3.9[83608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:04:19 np0005543037 python3.9[83761]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:19 np0005543037 ovs-vsctl[83762]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  2 20:04:20 np0005543037 python3.9[83914]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:20 np0005543037 python3.9[84069]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:04:20 np0005543037 ovs-vsctl[84070]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  2 20:04:21 np0005543037 python3.9[84220]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:04:22 np0005543037 python3.9[84374]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:23 np0005543037 python3.9[84526]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:25 np0005543037 python3.9[84604]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:25 np0005543037 python3.9[84756]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:26 np0005543037 python3.9[84834]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:27 np0005543037 python3.9[84986]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:28 np0005543037 python3.9[85138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:28 np0005543037 python3.9[85216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:29 np0005543037 python3.9[85368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:30 np0005543037 python3.9[85446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:30 np0005543037 python3.9[85598]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:04:30 np0005543037 systemd[1]: Reloading.
Dec  2 20:04:31 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:04:31 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:04:32 np0005543037 python3.9[85788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:32 np0005543037 python3.9[85866]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:33 np0005543037 python3.9[86018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:34 np0005543037 python3.9[86096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:35 np0005543037 python3.9[86248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:04:35 np0005543037 systemd[1]: Reloading.
Dec  2 20:04:35 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:04:35 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:04:35 np0005543037 systemd[1]: Starting Create netns directory...
Dec  2 20:04:35 np0005543037 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  2 20:04:35 np0005543037 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  2 20:04:35 np0005543037 systemd[1]: Finished Create netns directory.
Dec  2 20:04:36 np0005543037 python3.9[86441]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:37 np0005543037 python3.9[86593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:38 np0005543037 python3.9[86716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764723876.761959-468-174589115804978/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:39 np0005543037 python3.9[86868]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:04:39 np0005543037 python3.9[87020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:04:40 np0005543037 python3.9[87143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764723879.3958988-493-96981597582269/.source.json _original_basename=.ih8im7m8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:41 np0005543037 python3.9[87295]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:44 np0005543037 python3.9[87722]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  2 20:04:45 np0005543037 python3.9[87874]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:04:46 np0005543037 python3.9[88026]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  2 20:04:46 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:04:47 np0005543037 python3[88189]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:04:47 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:04:50 np0005543037 systemd[1]: var-lib-containers-storage-overlay-compat3174505029-lower\x2dmapped.mount: Deactivated successfully.
Dec  2 20:04:53 np0005543037 podman[88202]: 2025-12-03 01:04:53.632419469 +0000 UTC m=+5.757131934 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 20:04:53 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:04:53 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:04:53 np0005543037 podman[88321]: 2025-12-03 01:04:53.810688522 +0000 UTC m=+0.053773895 container create 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  2 20:04:53 np0005543037 podman[88321]: 2025-12-03 01:04:53.783162004 +0000 UTC m=+0.026247397 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 20:04:53 np0005543037 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  2 20:04:53 np0005543037 python3[88189]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  2 20:04:54 np0005543037 python3.9[88512]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:04:55 np0005543037 python3.9[88666]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:56 np0005543037 python3.9[88742]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:04:57 np0005543037 python3.9[88893]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764723896.3928676-581-121462620523232/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:04:57 np0005543037 python3.9[88969]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:04:57 np0005543037 systemd[1]: Reloading.
Dec  2 20:04:58 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:04:58 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:04:58 np0005543037 python3.9[89080]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:04:59 np0005543037 systemd[1]: Reloading.
Dec  2 20:05:00 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:05:00 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:05:00 np0005543037 systemd[1]: Starting ovn_controller container...
Dec  2 20:05:00 np0005543037 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  2 20:05:00 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:05:00 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808bcb6384b23452bcd1d6368dafee09d321969a57d81b4723ebe2407c4e8f83/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 20:05:00 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.
Dec  2 20:05:00 np0005543037 podman[89121]: 2025-12-03 01:05:00.431117861 +0000 UTC m=+0.182455393 container init 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: + sudo -E kolla_set_configs
Dec  2 20:05:00 np0005543037 podman[89121]: 2025-12-03 01:05:00.479636378 +0000 UTC m=+0.230973890 container start 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 20:05:00 np0005543037 edpm-start-podman-container[89121]: ovn_controller
Dec  2 20:05:00 np0005543037 systemd[1]: Created slice User Slice of UID 0.
Dec  2 20:05:00 np0005543037 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  2 20:05:00 np0005543037 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  2 20:05:00 np0005543037 systemd[1]: Starting User Manager for UID 0...
Dec  2 20:05:00 np0005543037 edpm-start-podman-container[89120]: Creating additional drop-in dependency for "ovn_controller" (926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f)
Dec  2 20:05:00 np0005543037 podman[89141]: 2025-12-03 01:05:00.611297124 +0000 UTC m=+0.118415681 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  2 20:05:00 np0005543037 systemd[1]: 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f-77c277b09b6e8051.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:05:00 np0005543037 systemd[1]: 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f-77c277b09b6e8051.service: Failed with result 'exit-code'.
Dec  2 20:05:00 np0005543037 systemd[1]: Reloading.
Dec  2 20:05:00 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:05:00 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:05:00 np0005543037 systemd[89174]: Queued start job for default target Main User Target.
Dec  2 20:05:00 np0005543037 systemd[89174]: Created slice User Application Slice.
Dec  2 20:05:00 np0005543037 systemd[89174]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  2 20:05:00 np0005543037 systemd[89174]: Started Daily Cleanup of User's Temporary Directories.
Dec  2 20:05:00 np0005543037 systemd[89174]: Reached target Paths.
Dec  2 20:05:00 np0005543037 systemd[89174]: Reached target Timers.
Dec  2 20:05:00 np0005543037 systemd[89174]: Starting D-Bus User Message Bus Socket...
Dec  2 20:05:00 np0005543037 systemd[89174]: Starting Create User's Volatile Files and Directories...
Dec  2 20:05:00 np0005543037 systemd[89174]: Listening on D-Bus User Message Bus Socket.
Dec  2 20:05:00 np0005543037 systemd[89174]: Reached target Sockets.
Dec  2 20:05:00 np0005543037 systemd[89174]: Finished Create User's Volatile Files and Directories.
Dec  2 20:05:00 np0005543037 systemd[89174]: Reached target Basic System.
Dec  2 20:05:00 np0005543037 systemd[89174]: Reached target Main User Target.
Dec  2 20:05:00 np0005543037 systemd[89174]: Startup finished in 183ms.
Dec  2 20:05:00 np0005543037 systemd[1]: Started User Manager for UID 0.
Dec  2 20:05:00 np0005543037 systemd[1]: Started ovn_controller container.
Dec  2 20:05:00 np0005543037 systemd[1]: Started Session c1 of User root.
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: INFO:__main__:Validating config file
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: INFO:__main__:Writing out command to execute
Dec  2 20:05:00 np0005543037 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: ++ cat /run_command
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: + ARGS=
Dec  2 20:05:00 np0005543037 ovn_controller[89134]: + sudo kolla_copy_cacerts
Dec  2 20:05:01 np0005543037 systemd[1]: Started Session c2 of User root.
Dec  2 20:05:01 np0005543037 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: + [[ ! -n '' ]]
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: + . kolla_extend_start
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: + umask 0022
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.2015] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.2038] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.2063] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.2072] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.2078] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  2 20:05:01 np0005543037 kernel: br-int: entered promiscuous mode
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  2 20:05:01 np0005543037 systemd-udevd[89290]: Network interface NamePolicy= disabled on kernel command line.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 20:05:01 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:01Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.4434] manager: (ovn-b585df-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  2 20:05:01 np0005543037 kernel: genev_sys_6081: entered promiscuous mode
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.4753] device (genev_sys_6081): carrier: link connected
Dec  2 20:05:01 np0005543037 NetworkManager[48912]: <info>  [1764723901.4759] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  2 20:05:01 np0005543037 python3.9[89399]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:01 np0005543037 ovs-vsctl[89400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  2 20:05:02 np0005543037 python3.9[89552]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:02 np0005543037 ovs-vsctl[89554]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  2 20:05:03 np0005543037 python3.9[89707]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:03 np0005543037 ovs-vsctl[89708]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  2 20:05:04 np0005543037 systemd-logind[800]: Session 18 logged out. Waiting for processes to exit.
Dec  2 20:05:04 np0005543037 systemd[1]: session-18.scope: Deactivated successfully.
Dec  2 20:05:04 np0005543037 systemd[1]: session-18.scope: Consumed 1min 8.652s CPU time.
Dec  2 20:05:04 np0005543037 systemd-logind[800]: Removed session 18.
Dec  2 20:05:10 np0005543037 systemd-logind[800]: New session 20 of user zuul.
Dec  2 20:05:10 np0005543037 systemd[1]: Started Session 20 of User zuul.
Dec  2 20:05:11 np0005543037 systemd[1]: Stopping User Manager for UID 0...
Dec  2 20:05:11 np0005543037 systemd[89174]: Activating special unit Exit the Session...
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped target Main User Target.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped target Basic System.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped target Paths.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped target Sockets.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped target Timers.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  2 20:05:11 np0005543037 systemd[89174]: Closed D-Bus User Message Bus Socket.
Dec  2 20:05:11 np0005543037 systemd[89174]: Stopped Create User's Volatile Files and Directories.
Dec  2 20:05:11 np0005543037 systemd[89174]: Removed slice User Application Slice.
Dec  2 20:05:11 np0005543037 systemd[89174]: Reached target Shutdown.
Dec  2 20:05:11 np0005543037 systemd[89174]: Finished Exit the Session.
Dec  2 20:05:11 np0005543037 systemd[89174]: Reached target Exit the Session.
Dec  2 20:05:11 np0005543037 systemd[1]: user@0.service: Deactivated successfully.
Dec  2 20:05:11 np0005543037 systemd[1]: Stopped User Manager for UID 0.
Dec  2 20:05:11 np0005543037 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  2 20:05:11 np0005543037 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  2 20:05:11 np0005543037 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  2 20:05:11 np0005543037 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  2 20:05:11 np0005543037 systemd[1]: Removed slice User Slice of UID 0.
Dec  2 20:05:11 np0005543037 python3.9[89886]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:05:12 np0005543037 python3.9[90047]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:14 np0005543037 python3.9[90212]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:05:14 np0005543037 systemd[1]: Reloading.
Dec  2 20:05:14 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:05:14 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:05:15 np0005543037 python3.9[90398]: ansible-ansible.builtin.service_facts Invoked
Dec  2 20:05:15 np0005543037 network[90415]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 20:05:15 np0005543037 network[90416]: 'network-scripts' will be removed from distribution in near future.
Dec  2 20:05:15 np0005543037 network[90417]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 20:05:21 np0005543037 python3.9[90679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:22 np0005543037 python3.9[90832]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:23 np0005543037 python3.9[90985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:24 np0005543037 python3.9[91138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:25 np0005543037 python3.9[91291]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:26 np0005543037 python3.9[91444]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:27 np0005543037 python3.9[91597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:05:29 np0005543037 python3.9[91750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:29 np0005543037 python3.9[91902]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:30 np0005543037 python3.9[92054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:30 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:30Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Dec  2 20:05:30 np0005543037 ovn_controller[89134]: 2025-12-03T01:05:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  2 20:05:30 np0005543037 podman[92109]: 2025-12-03 01:05:30.884788592 +0000 UTC m=+0.131363422 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  2 20:05:31 np0005543037 python3.9[92232]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:32 np0005543037 python3.9[92384]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:32 np0005543037 python3.9[92537]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:33 np0005543037 python3.9[92689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:34 np0005543037 python3.9[92841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:35 np0005543037 python3.9[92993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:36 np0005543037 python3.9[93145]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:36 np0005543037 python3.9[93297]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:37 np0005543037 python3.9[93449]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:38 np0005543037 python3.9[93601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:39 np0005543037 python3.9[93753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:05:40 np0005543037 python3.9[93905]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:41 np0005543037 python3.9[94057]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:05:42 np0005543037 python3.9[94209]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:05:42 np0005543037 systemd[1]: Reloading.
Dec  2 20:05:42 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:05:42 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:05:43 np0005543037 python3.9[94396]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:44 np0005543037 python3.9[94549]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:45 np0005543037 python3.9[94702]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:45 np0005543037 python3.9[94855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:46 np0005543037 python3.9[95008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:47 np0005543037 python3.9[95161]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:48 np0005543037 python3.9[95314]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:05:49 np0005543037 python3.9[95467]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  2 20:05:50 np0005543037 python3.9[95620]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 20:05:51 np0005543037 python3.9[95778]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 20:05:53 np0005543037 python3.9[95938]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:05:54 np0005543037 python3.9[96022]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:06:01 np0005543037 podman[96057]: 2025-12-03 01:06:01.873477926 +0000 UTC m=+0.131429573 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  2 20:06:21 np0005543037 kernel: SELinux:  Converting 2757 SID table entries...
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 20:06:21 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  Converting 2757 SID table entries...
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 20:06:30 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 20:06:32 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  2 20:06:32 np0005543037 podman[96253]: 2025-12-03 01:06:32.894484718 +0000 UTC m=+0.137511850 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 20:07:03 np0005543037 podman[105516]: 2025-12-03 01:07:03.878027859 +0000 UTC m=+0.131158322 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 20:07:33 np0005543037 kernel: SELinux:  Converting 2758 SID table entries...
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability network_peer_controls=1
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability open_perms=1
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability extended_socket_class=1
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability always_check_network=0
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  2 20:07:33 np0005543037 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  2 20:07:34 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  2 20:07:34 np0005543037 podman[113121]: 2025-12-03 01:07:34.49444082 +0000 UTC m=+0.157533759 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  2 20:07:34 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 20:07:34 np0005543037 dbus-broker-launch[767]: Noticed file-system modification, trigger reload.
Dec  2 20:07:43 np0005543037 systemd[1]: Stopping OpenSSH server daemon...
Dec  2 20:07:43 np0005543037 systemd[1]: sshd.service: Deactivated successfully.
Dec  2 20:07:43 np0005543037 systemd[1]: Stopped OpenSSH server daemon.
Dec  2 20:07:43 np0005543037 systemd[1]: sshd.service: Consumed 1.942s CPU time, read 32.0K from disk, written 4.0K to disk.
Dec  2 20:07:43 np0005543037 systemd[1]: Stopped target sshd-keygen.target.
Dec  2 20:07:43 np0005543037 systemd[1]: Stopping sshd-keygen.target...
Dec  2 20:07:43 np0005543037 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 20:07:43 np0005543037 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 20:07:43 np0005543037 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  2 20:07:43 np0005543037 systemd[1]: Reached target sshd-keygen.target.
Dec  2 20:07:43 np0005543037 systemd[1]: Starting OpenSSH server daemon...
Dec  2 20:07:43 np0005543037 systemd[1]: Started OpenSSH server daemon.
Dec  2 20:07:46 np0005543037 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  2 20:07:46 np0005543037 systemd[1]: Starting man-db-cache-update.service...
Dec  2 20:07:46 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:46 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:46 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:46 np0005543037 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  2 20:07:51 np0005543037 python3.9[117935]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:07:51 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:51 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:51 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:52 np0005543037 python3.9[119120]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:07:52 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:52 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:52 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:53 np0005543037 python3.9[120351]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:07:53 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:53 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:53 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:54 np0005543037 python3.9[121520]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:07:55 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:55 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:55 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:56 np0005543037 python3.9[122743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:07:56 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:56 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:56 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:57 np0005543037 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  2 20:07:57 np0005543037 systemd[1]: Finished man-db-cache-update.service.
Dec  2 20:07:57 np0005543037 systemd[1]: man-db-cache-update.service: Consumed 13.821s CPU time.
Dec  2 20:07:57 np0005543037 systemd[1]: run-r42de0716c2164488920e24670160307f.service: Deactivated successfully.
Dec  2 20:07:57 np0005543037 python3.9[123633]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:07:57 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:57 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:07:57 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:58 np0005543037 python3.9[123823]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:07:59 np0005543037 systemd[1]: Reloading.
Dec  2 20:07:59 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:07:59 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:08:00 np0005543037 python3.9[124013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:01 np0005543037 python3.9[124168]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:01 np0005543037 systemd[1]: Reloading.
Dec  2 20:08:01 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:08:01 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:08:02 np0005543037 python3.9[124358]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  2 20:08:02 np0005543037 systemd[1]: Reloading.
Dec  2 20:08:02 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:08:02 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:08:03 np0005543037 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  2 20:08:03 np0005543037 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  2 20:08:04 np0005543037 python3.9[124551]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:04 np0005543037 podman[124654]: 2025-12-03 01:08:04.932136646 +0000 UTC m=+0.182956403 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 20:08:05 np0005543037 python3.9[124729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:07 np0005543037 python3.9[124888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:08 np0005543037 python3.9[125043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:09 np0005543037 python3.9[125198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:10 np0005543037 python3.9[125353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:11 np0005543037 python3.9[125508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:12 np0005543037 python3.9[125663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:13 np0005543037 python3.9[125818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:15 np0005543037 python3.9[125973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:16 np0005543037 python3.9[126128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:17 np0005543037 python3.9[126283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:19 np0005543037 python3.9[126439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:20 np0005543037 python3.9[126594]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  2 20:08:21 np0005543037 python3.9[126749]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:22 np0005543037 python3.9[126901]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:23 np0005543037 python3.9[127053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:24 np0005543037 python3.9[127205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:25 np0005543037 python3.9[127357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:25 np0005543037 python3.9[127509]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:08:27 np0005543037 python3.9[127661]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:28 np0005543037 python3.9[127786]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724106.1147103-554-64980507818704/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:28 np0005543037 python3.9[127938]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:29 np0005543037 python3.9[128063]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724108.264661-554-203054879063040/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:30 np0005543037 python3.9[128215]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:31 np0005543037 python3.9[128340]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724109.8201556-554-237003242366092/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:32 np0005543037 python3.9[128492]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:32 np0005543037 python3.9[128617]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724111.4959571-554-136141935862436/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:33 np0005543037 python3.9[128769]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:34 np0005543037 python3.9[128894]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724113.0482216-554-86244142630330/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:35 np0005543037 podman[129018]: 2025-12-03 01:08:35.249076938 +0000 UTC m=+0.132170509 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 20:08:35 np0005543037 python3.9[129065]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:36 np0005543037 python3.9[129198]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724114.6510894-554-208574331546034/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:36 np0005543037 python3.9[129350]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:37 np0005543037 python3.9[129473]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724116.2701998-554-188144601487620/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:38 np0005543037 python3.9[129625]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:39 np0005543037 python3.9[129750]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764724117.7536788-554-242960495565794/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:40 np0005543037 python3.9[129902]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  2 20:08:41 np0005543037 python3.9[130055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:41 np0005543037 python3.9[130207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:42 np0005543037 python3.9[130359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:43 np0005543037 python3.9[130511]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:44 np0005543037 python3.9[130663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:45 np0005543037 python3.9[130815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:46 np0005543037 python3.9[130967]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:47 np0005543037 python3.9[131119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:47 np0005543037 python3.9[131271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:48 np0005543037 python3.9[131423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:49 np0005543037 python3.9[131575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:50 np0005543037 python3.9[131727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:51 np0005543037 python3.9[131879]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:51 np0005543037 python3.9[132031]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:52 np0005543037 python3.9[132183]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:53 np0005543037 python3.9[132306]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724132.270563-775-281344316964294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:54 np0005543037 python3.9[132458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:55 np0005543037 python3.9[132581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724133.836852-775-17713208219975/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:55 np0005543037 python3.9[132733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:56 np0005543037 python3.9[132856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724135.3225746-775-180398792079461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:57 np0005543037 python3.9[133008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:58 np0005543037 python3.9[133131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724136.8637676-775-112816617984071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:08:58 np0005543037 python3.9[133283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:08:59 np0005543037 python3.9[133406]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724138.3357427-775-277679050298585/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:00 np0005543037 python3.9[133558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:01 np0005543037 python3.9[133681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724139.8584023-775-83627938985654/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:01 np0005543037 python3.9[133833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:02 np0005543037 python3.9[133956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724141.324325-775-96386516564849/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:03 np0005543037 python3.9[134108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:04 np0005543037 python3.9[134231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724142.8382962-775-136486080407141/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:04 np0005543037 python3.9[134383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:05 np0005543037 podman[134506]: 2025-12-03 01:09:05.47859816 +0000 UTC m=+0.150905217 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 20:09:05 np0005543037 python3.9[134507]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724144.2625206-775-101879579551709/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:06 np0005543037 python3.9[134684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:06 np0005543037 python3.9[134807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724145.6870432-775-197444344919079/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:07 np0005543037 python3.9[134959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:08 np0005543037 python3.9[135082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724147.1335928-775-18919158659108/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:09 np0005543037 python3.9[135234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:09 np0005543037 python3.9[135357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724148.5678654-775-212599326249677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:10 np0005543037 python3.9[135509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:11 np0005543037 python3.9[135632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724150.0798807-775-243042836051934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:12 np0005543037 python3.9[135784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:12 np0005543037 python3.9[135907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724151.56528-775-33823100341130/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:13 np0005543037 python3.9[136057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:09:14 np0005543037 python3.9[136212]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  2 20:09:16 np0005543037 dbus-broker-launch[785]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  2 20:09:16 np0005543037 python3.9[136368]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:17 np0005543037 python3.9[136520]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:18 np0005543037 python3.9[136672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:19 np0005543037 python3.9[136824]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:19 np0005543037 python3.9[136976]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:20 np0005543037 python3.9[137128]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:21 np0005543037 python3.9[137280]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:22 np0005543037 python3.9[137432]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:23 np0005543037 python3.9[137584]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:24 np0005543037 python3.9[137736]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:25 np0005543037 python3.9[137888]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:09:25 np0005543037 systemd[1]: Reloading.
Dec  2 20:09:25 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:09:25 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:09:25 np0005543037 systemd[1]: Starting libvirt logging daemon socket...
Dec  2 20:09:25 np0005543037 systemd[1]: Listening on libvirt logging daemon socket.
Dec  2 20:09:25 np0005543037 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  2 20:09:25 np0005543037 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  2 20:09:25 np0005543037 systemd[1]: Starting libvirt logging daemon...
Dec  2 20:09:25 np0005543037 systemd[1]: Started libvirt logging daemon.
Dec  2 20:09:26 np0005543037 python3.9[138082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:09:26 np0005543037 systemd[1]: Reloading.
Dec  2 20:09:26 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:09:26 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:09:27 np0005543037 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  2 20:09:27 np0005543037 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  2 20:09:27 np0005543037 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  2 20:09:27 np0005543037 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  2 20:09:27 np0005543037 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  2 20:09:27 np0005543037 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  2 20:09:27 np0005543037 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  2 20:09:27 np0005543037 systemd[1]: Starting libvirt nodedev daemon...
Dec  2 20:09:27 np0005543037 systemd[1]: Started libvirt nodedev daemon.
Dec  2 20:09:27 np0005543037 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  2 20:09:27 np0005543037 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  2 20:09:27 np0005543037 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  2 20:09:28 np0005543037 python3.9[138305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:09:28 np0005543037 systemd[1]: Reloading.
Dec  2 20:09:28 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:09:28 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:09:28 np0005543037 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  2 20:09:28 np0005543037 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  2 20:09:28 np0005543037 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  2 20:09:28 np0005543037 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  2 20:09:28 np0005543037 systemd[1]: Starting libvirt proxy daemon...
Dec  2 20:09:28 np0005543037 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 45987a61-d46d-4d61-a1f4-80217b511162
Dec  2 20:09:28 np0005543037 systemd[1]: Started libvirt proxy daemon.
Dec  2 20:09:28 np0005543037 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  2 20:09:28 np0005543037 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 45987a61-d46d-4d61-a1f4-80217b511162
Dec  2 20:09:28 np0005543037 setroubleshoot[138118]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  2 20:09:29 np0005543037 python3.9[138518]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:09:29 np0005543037 systemd[1]: Reloading.
Dec  2 20:09:29 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:09:29 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:09:29 np0005543037 systemd[1]: Listening on libvirt locking daemon socket.
Dec  2 20:09:29 np0005543037 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  2 20:09:29 np0005543037 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  2 20:09:29 np0005543037 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  2 20:09:29 np0005543037 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  2 20:09:29 np0005543037 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  2 20:09:29 np0005543037 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  2 20:09:29 np0005543037 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  2 20:09:29 np0005543037 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  2 20:09:29 np0005543037 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  2 20:09:29 np0005543037 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 20:09:29 np0005543037 systemd[1]: Started libvirt QEMU daemon.
Dec  2 20:09:30 np0005543037 python3.9[138733]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:09:30 np0005543037 systemd[1]: Reloading.
Dec  2 20:09:31 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:09:31 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:09:31 np0005543037 systemd[1]: Starting libvirt secret daemon socket...
Dec  2 20:09:31 np0005543037 systemd[1]: Listening on libvirt secret daemon socket.
Dec  2 20:09:31 np0005543037 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  2 20:09:31 np0005543037 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  2 20:09:31 np0005543037 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  2 20:09:31 np0005543037 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  2 20:09:31 np0005543037 systemd[1]: Starting libvirt secret daemon...
Dec  2 20:09:31 np0005543037 systemd[1]: Started libvirt secret daemon.
Dec  2 20:09:32 np0005543037 python3.9[138945]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:33 np0005543037 python3.9[139097]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:09:34 np0005543037 python3.9[139249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:35 np0005543037 python3.9[139372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724173.760765-1120-203147377214047/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:35 np0005543037 podman[139472]: 2025-12-03 01:09:35.853912165 +0000 UTC m=+0.111474224 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  2 20:09:36 np0005543037 python3.9[139550]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:36 np0005543037 python3.9[139702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:37 np0005543037 python3.9[139780]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:38 np0005543037 python3.9[139932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:38 np0005543037 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  2 20:09:38 np0005543037 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.017s CPU time.
Dec  2 20:09:38 np0005543037 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  2 20:09:38 np0005543037 python3.9[140010]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xj52i7t5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:39 np0005543037 python3.9[140162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:40 np0005543037 python3.9[140240]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:41 np0005543037 python3.9[140392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:09:42 np0005543037 python3[140545]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 20:09:43 np0005543037 python3.9[140697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:43 np0005543037 python3.9[140775]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:44 np0005543037 python3.9[140927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:45 np0005543037 python3.9[141005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:46 np0005543037 python3.9[141157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:46 np0005543037 python3.9[141235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:47 np0005543037 python3.9[141387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:48 np0005543037 python3.9[141465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:49 np0005543037 python3.9[141617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:49 np0005543037 python3.9[141742]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724188.476935-1245-49384853960312/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:50 np0005543037 python3.9[141894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:51 np0005543037 python3.9[142046]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:09:52 np0005543037 python3.9[142201]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:53 np0005543037 python3.9[142353]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:09:54 np0005543037 python3.9[142506]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:09:55 np0005543037 python3.9[142660]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:09:55 np0005543037 python3.9[142815]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:56 np0005543037 python3.9[142967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:57 np0005543037 python3.9[143090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724196.3381162-1317-173719425326966/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:09:58 np0005543037 python3.9[143242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:09:59 np0005543037 python3.9[143365]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724197.9343765-1332-13288238363288/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:00 np0005543037 python3.9[143517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:00 np0005543037 python3.9[143640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724199.5368965-1347-164659593333833/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:01 np0005543037 python3.9[143792]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:10:01 np0005543037 systemd[1]: Reloading.
Dec  2 20:10:01 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:10:01 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:10:02 np0005543037 systemd[1]: Reached target edpm_libvirt.target.
Dec  2 20:10:03 np0005543037 python3.9[143983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  2 20:10:03 np0005543037 systemd[1]: Reloading.
Dec  2 20:10:03 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:10:03 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:10:03 np0005543037 systemd[1]: Reloading.
Dec  2 20:10:03 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:10:03 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:10:04 np0005543037 systemd[1]: session-20.scope: Deactivated successfully.
Dec  2 20:10:04 np0005543037 systemd[1]: session-20.scope: Consumed 4min 4.020s CPU time.
Dec  2 20:10:04 np0005543037 systemd-logind[800]: Session 20 logged out. Waiting for processes to exit.
Dec  2 20:10:04 np0005543037 systemd-logind[800]: Removed session 20.
Dec  2 20:10:06 np0005543037 podman[144080]: 2025-12-03 01:10:06.94636104 +0000 UTC m=+0.197219202 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 20:10:10 np0005543037 systemd-logind[800]: New session 21 of user zuul.
Dec  2 20:10:10 np0005543037 systemd[1]: Started Session 21 of User zuul.
Dec  2 20:10:11 np0005543037 python3.9[144260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:10:13 np0005543037 python3.9[144416]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:10:13 np0005543037 systemd[1]: Reloading.
Dec  2 20:10:13 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:10:13 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:10:15 np0005543037 python3.9[144600]: ansible-ansible.builtin.service_facts Invoked
Dec  2 20:10:15 np0005543037 network[144617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 20:10:15 np0005543037 network[144618]: 'network-scripts' will be removed from distribution in near future.
Dec  2 20:10:15 np0005543037 network[144619]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 20:10:21 np0005543037 python3.9[144891]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:10:22 np0005543037 python3.9[145044]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:23 np0005543037 python3.9[145196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:24 np0005543037 python3.9[145348]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:10:25 np0005543037 python3.9[145500]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:10:26 np0005543037 python3.9[145652]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:10:26 np0005543037 systemd[1]: Reloading.
Dec  2 20:10:26 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:10:26 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:10:27 np0005543037 python3.9[145840]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:10:28 np0005543037 python3.9[145993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:10:29 np0005543037 python3.9[146143]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:10:30 np0005543037 python3.9[146295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:30 np0005543037 python3.9[146416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724229.4501264-133-10094918554559/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:10:31 np0005543037 python3.9[146568]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  2 20:10:33 np0005543037 python3.9[146720]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  2 20:10:34 np0005543037 python3.9[146873]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  2 20:10:35 np0005543037 python3.9[147031]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  2 20:10:36 np0005543037 python3.9[147189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:37 np0005543037 podman[147284]: 2025-12-03 01:10:37.254893422 +0000 UTC m=+0.141503633 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  2 20:10:37 np0005543037 python3.9[147323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724236.1263049-201-45576077124287/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:38 np0005543037 python3.9[147487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:38 np0005543037 python3.9[147608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724237.563031-201-26509365525789/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:39 np0005543037 python3.9[147758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:40 np0005543037 python3.9[147879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724239.158092-201-64655868279686/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:41 np0005543037 python3.9[148029]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:10:41 np0005543037 python3.9[148181]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:10:42 np0005543037 python3.9[148333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:43 np0005543037 python3.9[148454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724242.2372503-260-94435110314242/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:44 np0005543037 python3.9[148604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:44 np0005543037 python3.9[148680]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:45 np0005543037 python3.9[148830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:46 np0005543037 python3.9[148951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724245.0575614-260-268496560557249/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:47 np0005543037 python3.9[149101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:47 np0005543037 python3.9[149222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724246.6965408-260-227293744136339/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:48 np0005543037 python3.9[149372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:49 np0005543037 python3.9[149493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724248.206519-260-135852601045186/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:50 np0005543037 python3.9[149643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:50 np0005543037 python3.9[149764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724249.5579848-260-220429034871979/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:51 np0005543037 python3.9[149914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:52 np0005543037 python3.9[150035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724251.1041336-260-147737679316964/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:53 np0005543037 python3.9[150185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:53 np0005543037 python3.9[150306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724252.455809-260-162574920611845/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:54 np0005543037 python3.9[150456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:55 np0005543037 python3.9[150577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724253.8739967-260-188606570879260/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:55 np0005543037 python3.9[150727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:56 np0005543037 python3.9[150848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724255.3754954-260-107486637468523/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:57 np0005543037 python3.9[150998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:58 np0005543037 python3.9[151119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724256.8346415-260-79351533927252/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:10:59 np0005543037 python3.9[151269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:10:59 np0005543037 python3.9[151345]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:00 np0005543037 python3.9[151495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:00 np0005543037 python3.9[151571]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:01 np0005543037 python3.9[151721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:02 np0005543037 python3.9[151797]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:03 np0005543037 python3.9[151949]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:04 np0005543037 python3.9[152101]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:05 np0005543037 python3.9[152253]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:11:05 np0005543037 python3.9[152405]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:11:06 np0005543037 systemd[1]: Reloading.
Dec  2 20:11:06 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:11:06 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:11:06 np0005543037 systemd[1]: Listening on Podman API Socket.
Dec  2 20:11:07 np0005543037 python3.9[152596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:07 np0005543037 podman[152684]: 2025-12-03 01:11:07.853171533 +0000 UTC m=+0.112522562 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 20:11:08 np0005543037 python3.9[152744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:11:08 np0005543037 python3.9[152822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:09 np0005543037 python3.9[152945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724266.7505462-482-154463817992616/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:11:10 np0005543037 python3.9[153097]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  2 20:11:11 np0005543037 python3.9[153249]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:11:13 np0005543037 python3[153401]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:11:27 np0005543037 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  2 20:11:28 np0005543037 podman[153413]: 2025-12-03 01:11:28.476004735 +0000 UTC m=+15.338801640 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  2 20:11:28 np0005543037 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  2 20:11:28 np0005543037 podman[153557]: 2025-12-03 01:11:28.713208496 +0000 UTC m=+0.074717093 container create 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  2 20:11:28 np0005543037 podman[153557]: 2025-12-03 01:11:28.674822642 +0000 UTC m=+0.036331289 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  2 20:11:28 np0005543037 python3[153401]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  2 20:11:29 np0005543037 python3.9[153749]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:11:30 np0005543037 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  2 20:11:30 np0005543037 python3.9[153905]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:31 np0005543037 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  2 20:11:31 np0005543037 python3.9[154057]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724290.8544693-546-241107542865356/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:32 np0005543037 python3.9[154133]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:11:32 np0005543037 systemd[1]: Reloading.
Dec  2 20:11:32 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:11:32 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:11:33 np0005543037 python3.9[154245]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:11:33 np0005543037 systemd[1]: Reloading.
Dec  2 20:11:33 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:11:33 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:11:34 np0005543037 systemd[1]: Starting ceilometer_agent_compute container...
Dec  2 20:11:34 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:11:34 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:34 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:34 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:34 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:34 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec  2 20:11:34 np0005543037 podman[154285]: 2025-12-03 01:11:34.303196733 +0000 UTC m=+0.219371316 container init 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm)
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + sudo -E kolla_set_configs
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: sudo: unable to send audit message: Operation not permitted
Dec  2 20:11:34 np0005543037 podman[154285]: 2025-12-03 01:11:34.351481347 +0000 UTC m=+0.267655870 container start 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  2 20:11:34 np0005543037 podman[154285]: ceilometer_agent_compute
Dec  2 20:11:34 np0005543037 systemd[1]: Started ceilometer_agent_compute container.
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Validating config file
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Copying service configuration files
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: INFO:__main__:Writing out command to execute
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: ++ cat /run_command
Dec  2 20:11:34 np0005543037 podman[154307]: 2025-12-03 01:11:34.445779158 +0000 UTC m=+0.078676468 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + ARGS=
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + sudo kolla_copy_cacerts
Dec  2 20:11:34 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:11:34 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.service: Failed with result 'exit-code'.
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: sudo: unable to send audit message: Operation not permitted
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + [[ ! -n '' ]]
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + . kolla_extend_start
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + umask 0022
Dec  2 20:11:34 np0005543037 ceilometer_agent_compute[154300]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.242 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.243 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.244 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.245 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.246 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.247 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.248 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.249 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.250 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.251 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.252 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.253 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.254 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.273 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.274 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.274 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.275 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.276 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.277 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.278 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.279 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.280 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.281 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.282 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.283 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.289 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.290 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.291 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.292 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.293 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.293 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.294 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.296 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.297 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  2 20:11:35 np0005543037 python3.9[154484]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:11:35 np0005543037 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.469 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.469 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  2 20:11:35 np0005543037 systemd[1]: Starting libvirt QEMU daemon...
Dec  2 20:11:35 np0005543037 systemd[1]: Started libvirt QEMU daemon.
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.571 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.571 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.580 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.582 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.783 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.784 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.785 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.786 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.787 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.788 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.789 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.790 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.791 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.792 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.793 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.794 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.795 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.796 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.797 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.798 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.799 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.800 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.801 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.802 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.803 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.804 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.805 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.806 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.807 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.808 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.808 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  2 20:11:35 np0005543037 virtqemud[154511]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  2 20:11:35 np0005543037 virtqemud[154511]: hostname: compute-0
Dec  2 20:11:35 np0005543037 virtqemud[154511]: End of file while reading data: Input/output error
Dec  2 20:11:35 np0005543037 ceilometer_agent_compute[154300]: 2025-12-03 01:11:35.821 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  2 20:11:36 np0005543037 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  2 20:11:36 np0005543037 systemd[1]: libpod-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Consumed 1.537s CPU time.
Dec  2 20:11:36 np0005543037 podman[154496]: 2025-12-03 01:11:36.012749408 +0000 UTC m=+0.599149378 container died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  2 20:11:36 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-597a351873c464a5.timer: Deactivated successfully.
Dec  2 20:11:36 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec  2 20:11:36 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-userdata-shm.mount: Deactivated successfully.
Dec  2 20:11:36 np0005543037 systemd[1]: var-lib-containers-storage-overlay-c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121-merged.mount: Deactivated successfully.
Dec  2 20:11:38 np0005543037 podman[154549]: 2025-12-03 01:11:38.395174526 +0000 UTC m=+0.149954450 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Dec  2 20:11:39 np0005543037 podman[154496]: 2025-12-03 01:11:39.501317065 +0000 UTC m=+4.087717035 container cleanup 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  2 20:11:39 np0005543037 podman[154496]: ceilometer_agent_compute
Dec  2 20:11:39 np0005543037 podman[154576]: ceilometer_agent_compute
Dec  2 20:11:39 np0005543037 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  2 20:11:39 np0005543037 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  2 20:11:39 np0005543037 systemd[1]: Starting ceilometer_agent_compute container...
Dec  2 20:11:39 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:11:39 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:39 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:39 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:39 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05fda9c1bf365319581a3b522676c832bdfa8164015757f5238b71ba927c121/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:39 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.
Dec  2 20:11:39 np0005543037 podman[154589]: 2025-12-03 01:11:39.803959097 +0000 UTC m=+0.159214272 container init 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + sudo -E kolla_set_configs
Dec  2 20:11:39 np0005543037 podman[154589]: 2025-12-03 01:11:39.837119923 +0000 UTC m=+0.192375048 container start 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: sudo: unable to send audit message: Operation not permitted
Dec  2 20:11:39 np0005543037 podman[154589]: ceilometer_agent_compute
Dec  2 20:11:39 np0005543037 systemd[1]: Started ceilometer_agent_compute container.
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Validating config file
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Copying service configuration files
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: INFO:__main__:Writing out command to execute
Dec  2 20:11:39 np0005543037 podman[154612]: 2025-12-03 01:11:39.933046073 +0000 UTC m=+0.078263265 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: ++ cat /run_command
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + ARGS=
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + sudo kolla_copy_cacerts
Dec  2 20:11:39 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:11:39 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Failed with result 'exit-code'.
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: sudo: unable to send audit message: Operation not permitted
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + [[ ! -n '' ]]
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + . kolla_extend_start
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + umask 0022
Dec  2 20:11:39 np0005543037 ceilometer_agent_compute[154605]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.700 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.701 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.702 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.703 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.704 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.705 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.708 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.712 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.735 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  2 20:11:40 np0005543037 python3.9[154790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.736 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.737 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.738 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.739 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.740 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.741 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.742 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.743 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.745 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.746 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.747 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.750 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.751 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.756 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.757 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.758 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.760 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.763 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.764 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.767 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.779 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.780 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.780 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.921 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.922 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.923 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.924 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.925 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.926 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.927 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.928 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.929 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.930 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.931 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.932 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.933 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.934 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.935 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.938 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.964 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.965 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.967 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea70b770>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.985 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:11:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:11:41 np0005543037 python3.9[154926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724300.1193957-578-106239911037634/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:11:42 np0005543037 python3.9[155078]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  2 20:11:43 np0005543037 python3.9[155230]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:11:44 np0005543037 python3[155382]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:11:46 np0005543037 podman[155394]: 2025-12-03 01:11:46.012316838 +0000 UTC m=+1.333773195 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  2 20:11:46 np0005543037 podman[155491]: 2025-12-03 01:11:46.238837161 +0000 UTC m=+0.066195260 container create 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec  2 20:11:46 np0005543037 podman[155491]: 2025-12-03 01:11:46.19925407 +0000 UTC m=+0.026612219 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  2 20:11:46 np0005543037 python3[155382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  2 20:11:47 np0005543037 python3.9[155681]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:11:48 np0005543037 python3.9[155835]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:49 np0005543037 python3.9[155986]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724308.4020264-631-194718615764564/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:11:49 np0005543037 python3.9[156062]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:11:49 np0005543037 systemd[1]: Reloading.
Dec  2 20:11:49 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:11:49 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:11:50 np0005543037 python3.9[156172]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:11:51 np0005543037 systemd[1]: Reloading.
Dec  2 20:11:51 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:11:51 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:11:52 np0005543037 systemd[1]: Starting node_exporter container...
Dec  2 20:11:52 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:11:52 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:52 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:52 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec  2 20:11:52 np0005543037 podman[156212]: 2025-12-03 01:11:52.411709726 +0000 UTC m=+0.183974423 container init 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.434Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.436Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=arp
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=bcache
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=bonding
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=cpu
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=edac
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=filefd
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.437Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netclass
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netdev
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=netstat
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nfs
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=nvme
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=softnet
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=systemd
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=xfs
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.438Z caller=node_exporter.go:117 level=info collector=zfs
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.439Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  2 20:11:52 np0005543037 node_exporter[156228]: ts=2025-12-03T01:11:52.440Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  2 20:11:52 np0005543037 podman[156212]: 2025-12-03 01:11:52.453560836 +0000 UTC m=+0.225825503 container start 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:11:52 np0005543037 podman[156212]: node_exporter
Dec  2 20:11:52 np0005543037 systemd[1]: Started node_exporter container.
Dec  2 20:11:52 np0005543037 podman[156237]: 2025-12-03 01:11:52.563421379 +0000 UTC m=+0.092677903 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:11:53 np0005543037 python3.9[156413]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:11:53 np0005543037 systemd[1]: Stopping node_exporter container...
Dec  2 20:11:53 np0005543037 systemd[1]: libpod-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec  2 20:11:53 np0005543037 podman[156417]: 2025-12-03 01:11:53.556215339 +0000 UTC m=+0.070532301 container died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 20:11:53 np0005543037 systemd[1]: 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-2874ab3da3df488a.timer: Deactivated successfully.
Dec  2 20:11:53 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec  2 20:11:53 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb-userdata-shm.mount: Deactivated successfully.
Dec  2 20:11:53 np0005543037 systemd[1]: var-lib-containers-storage-overlay-a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8-merged.mount: Deactivated successfully.
Dec  2 20:11:53 np0005543037 podman[156417]: 2025-12-03 01:11:53.753938347 +0000 UTC m=+0.268255299 container cleanup 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:11:53 np0005543037 podman[156417]: node_exporter
Dec  2 20:11:53 np0005543037 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 20:11:53 np0005543037 podman[156446]: node_exporter
Dec  2 20:11:53 np0005543037 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  2 20:11:53 np0005543037 systemd[1]: Stopped node_exporter container.
Dec  2 20:11:53 np0005543037 systemd[1]: Starting node_exporter container...
Dec  2 20:11:54 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:11:54 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:54 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a223acd36c294252abbef2129cb869c4b2118341768b302e0db9403ccbec37a8/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:11:54 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.
Dec  2 20:11:54 np0005543037 podman[156459]: 2025-12-03 01:11:54.057185898 +0000 UTC m=+0.150613251 container init 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.074Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.075Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.076Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=arp
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=bcache
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=bonding
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=cpu
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=edac
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=filefd
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netclass
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netdev
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=netstat
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nfs
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=nvme
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=softnet
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=systemd
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=xfs
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.077Z caller=node_exporter.go:117 level=info collector=zfs
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.078Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  2 20:11:54 np0005543037 node_exporter[156474]: ts=2025-12-03T01:11:54.079Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  2 20:11:54 np0005543037 podman[156459]: 2025-12-03 01:11:54.094256182 +0000 UTC m=+0.187683515 container start 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:11:54 np0005543037 podman[156459]: node_exporter
Dec  2 20:11:54 np0005543037 systemd[1]: Started node_exporter container.
Dec  2 20:11:54 np0005543037 podman[156483]: 2025-12-03 01:11:54.164828104 +0000 UTC m=+0.063391485 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:11:54 np0005543037 python3.9[156659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:11:55 np0005543037 python3.9[156782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724314.3554714-663-128042525100474/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:11:56 np0005543037 python3.9[156934]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  2 20:11:57 np0005543037 python3.9[157086]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:11:58 np0005543037 python3[157238]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:12:00 np0005543037 podman[157253]: 2025-12-03 01:12:00.205293922 +0000 UTC m=+1.563265157 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  2 20:12:00 np0005543037 podman[157350]: 2025-12-03 01:12:00.378029353 +0000 UTC m=+0.071734818 container create 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 20:12:00 np0005543037 podman[157350]: 2025-12-03 01:12:00.342394801 +0000 UTC m=+0.036100336 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  2 20:12:00 np0005543037 python3[157238]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  2 20:12:01 np0005543037 python3.9[157539]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:12:02 np0005543037 python3.9[157693]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:03 np0005543037 python3.9[157844]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724322.5425866-716-198375030057364/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:04 np0005543037 python3.9[157920]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:12:04 np0005543037 systemd[1]: Reloading.
Dec  2 20:12:04 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:12:04 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:12:05 np0005543037 python3.9[158031]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:12:05 np0005543037 systemd[1]: Reloading.
Dec  2 20:12:05 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:12:05 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:12:05 np0005543037 systemd[1]: Starting podman_exporter container...
Dec  2 20:12:05 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:12:05 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:05 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:05 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec  2 20:12:05 np0005543037 podman[158072]: 2025-12-03 01:12:05.717289178 +0000 UTC m=+0.204008060 container init 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.742Z caller=handler.go:105 level=info collector=container
Dec  2 20:12:05 np0005543037 podman[158072]: 2025-12-03 01:12:05.748868716 +0000 UTC m=+0.235587588 container start 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:12:05 np0005543037 podman[158072]: podman_exporter
Dec  2 20:12:05 np0005543037 systemd[1]: Starting Podman API Service...
Dec  2 20:12:05 np0005543037 systemd[1]: Started Podman API Service.
Dec  2 20:12:05 np0005543037 systemd[1]: Started podman_exporter container.
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Setting parallel job count to 25"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Using sqlite as database backend"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  2 20:12:05 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  2 20:12:05 np0005543037 podman[158098]: time="2025-12-03T01:12:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:12:05 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.852Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.854Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  2 20:12:05 np0005543037 podman_exporter[158087]: ts=2025-12-03T01:12:05.854Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  2 20:12:05 np0005543037 podman[158096]: 2025-12-03 01:12:05.856893463 +0000 UTC m=+0.086398142 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:12:05 np0005543037 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:12:05 np0005543037 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.service: Failed with result 'exit-code'.
Dec  2 20:12:06 np0005543037 python3.9[158281]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:12:06 np0005543037 systemd[1]: Stopping podman_exporter container...
Dec  2 20:12:06 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:12:05 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  2 20:12:06 np0005543037 systemd[1]: libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec  2 20:12:06 np0005543037 conmon[158087]: conmon 7fad237e83203b5eedaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope/container/memory.events
Dec  2 20:12:06 np0005543037 podman[158285]: 2025-12-03 01:12:06.9128796 +0000 UTC m=+0.068493199 container died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:12:06 np0005543037 systemd[1]: 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-47b3dd630bd3997e.timer: Deactivated successfully.
Dec  2 20:12:06 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec  2 20:12:06 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a-userdata-shm.mount: Deactivated successfully.
Dec  2 20:12:06 np0005543037 systemd[1]: var-lib-containers-storage-overlay-9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807-merged.mount: Deactivated successfully.
Dec  2 20:12:07 np0005543037 podman[158285]: 2025-12-03 01:12:07.195830194 +0000 UTC m=+0.351443803 container cleanup 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:12:07 np0005543037 podman[158285]: podman_exporter
Dec  2 20:12:07 np0005543037 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 20:12:07 np0005543037 podman[158311]: podman_exporter
Dec  2 20:12:07 np0005543037 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  2 20:12:07 np0005543037 systemd[1]: Stopped podman_exporter container.
Dec  2 20:12:07 np0005543037 systemd[1]: Starting podman_exporter container...
Dec  2 20:12:07 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:12:07 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:07 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3af917fa450f54f3298d1356fcb0769645478608c41ec56846e1707f625807/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:07 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.
Dec  2 20:12:07 np0005543037 podman[158324]: 2025-12-03 01:12:07.481699257 +0000 UTC m=+0.155898931 container init 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.503Z caller=handler.go:105 level=info collector=container
Dec  2 20:12:07 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:12:07 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  2 20:12:07 np0005543037 podman[158098]: time="2025-12-03T01:12:07Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:12:07 np0005543037 podman[158324]: 2025-12-03 01:12:07.51640779 +0000 UTC m=+0.190607474 container start 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:12:07 np0005543037 podman[158324]: podman_exporter
Dec  2 20:12:07 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:12:07 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Dec  2 20:12:07 np0005543037 systemd[1]: Started podman_exporter container.
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.528Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.529Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  2 20:12:07 np0005543037 podman_exporter[158339]: ts=2025-12-03T01:12:07.530Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  2 20:12:07 np0005543037 podman[158349]: 2025-12-03 01:12:07.610764333 +0000 UTC m=+0.073254264 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:12:08 np0005543037 python3.9[158526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:09 np0005543037 python3.9[158649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724327.8103576-748-143711231949342/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:12:09 np0005543037 podman[158769]: 2025-12-03 01:12:09.895446907 +0000 UTC m=+0.152778716 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 20:12:10 np0005543037 python3.9[158822]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  2 20:12:10 np0005543037 podman[158951]: 2025-12-03 01:12:10.781684273 +0000 UTC m=+0.081530254 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  2 20:12:10 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:12:10 np0005543037 systemd[1]: 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264-2d98b0e3ac0f54e5.service: Failed with result 'exit-code'.
Dec  2 20:12:10 np0005543037 python3.9[158998]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:12:12 np0005543037 python3[159150]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:12:14 np0005543037 podman[159163]: 2025-12-03 01:12:14.792716262 +0000 UTC m=+2.511051202 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 20:12:15 np0005543037 podman[159264]: 2025-12-03 01:12:15.008253961 +0000 UTC m=+0.072023826 container create 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., version=9.6)
Dec  2 20:12:15 np0005543037 podman[159264]: 2025-12-03 01:12:14.97295753 +0000 UTC m=+0.036727435 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 20:12:15 np0005543037 python3[159150]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  2 20:12:16 np0005543037 python3.9[159455]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:12:17 np0005543037 python3.9[159609]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:18 np0005543037 python3.9[159760]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724337.143841-801-190947757946186/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:18 np0005543037 python3.9[159836]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:12:18 np0005543037 systemd[1]: Reloading.
Dec  2 20:12:18 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:12:19 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:12:19 np0005543037 python3.9[159947]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:12:19 np0005543037 systemd[1]: Reloading.
Dec  2 20:12:20 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:12:20 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:12:20 np0005543037 systemd[1]: Starting openstack_network_exporter container...
Dec  2 20:12:20 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:12:20 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:20 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:20 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:20 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec  2 20:12:20 np0005543037 podman[159987]: 2025-12-03 01:12:20.451896593 +0000 UTC m=+0.189400777 container init 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *bridge.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *coverage.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *datapath.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *iface.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *memory.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovnnorthd.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovn.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *ovsdbserver.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *pmd_perf.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *pmd_rxq.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: INFO    01:12:20 main.go:48: registering *vswitch.Collector
Dec  2 20:12:20 np0005543037 openstack_network_exporter[160003]: NOTICE  01:12:20 main.go:76: listening on https://:9105/metrics
Dec  2 20:12:20 np0005543037 podman[159987]: 2025-12-03 01:12:20.492217507 +0000 UTC m=+0.229721701 container start 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, version=9.6, architecture=x86_64, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 20:12:20 np0005543037 podman[159987]: openstack_network_exporter
Dec  2 20:12:20 np0005543037 systemd[1]: Started openstack_network_exporter container.
Dec  2 20:12:20 np0005543037 podman[160013]: 2025-12-03 01:12:20.614160506 +0000 UTC m=+0.106084399 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6)
Dec  2 20:12:21 np0005543037 python3.9[160187]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:12:21 np0005543037 systemd[1]: Stopping openstack_network_exporter container...
Dec  2 20:12:21 np0005543037 systemd[1]: libpod-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec  2 20:12:21 np0005543037 podman[160191]: 2025-12-03 01:12:21.689046466 +0000 UTC m=+0.095620472 container died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  2 20:12:21 np0005543037 systemd[1]: 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-398a825980b2ed60.timer: Deactivated successfully.
Dec  2 20:12:21 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec  2 20:12:21 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44-userdata-shm.mount: Deactivated successfully.
Dec  2 20:12:21 np0005543037 systemd[1]: var-lib-containers-storage-overlay-6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e-merged.mount: Deactivated successfully.
Dec  2 20:12:22 np0005543037 podman[160191]: 2025-12-03 01:12:22.702053629 +0000 UTC m=+1.108627575 container cleanup 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 20:12:22 np0005543037 podman[160191]: openstack_network_exporter
Dec  2 20:12:22 np0005543037 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  2 20:12:22 np0005543037 podman[160219]: openstack_network_exporter
Dec  2 20:12:22 np0005543037 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  2 20:12:22 np0005543037 systemd[1]: Stopped openstack_network_exporter container.
Dec  2 20:12:22 np0005543037 systemd[1]: Starting openstack_network_exporter container...
Dec  2 20:12:22 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:12:22 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:22 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:22 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a9fbefe05972c72a9f7a13632a386f66954f2ee389425ed857290e85304a23e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:12:22 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.
Dec  2 20:12:22 np0005543037 podman[160233]: 2025-12-03 01:12:22.939121201 +0000 UTC m=+0.151070214 container init 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *bridge.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *coverage.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *datapath.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *iface.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *memory.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovnnorthd.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovn.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *ovsdbserver.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *pmd_perf.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *pmd_rxq.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: INFO    01:12:22 main.go:48: registering *vswitch.Collector
Dec  2 20:12:22 np0005543037 openstack_network_exporter[160250]: NOTICE  01:12:22 main.go:76: listening on https://:9105/metrics
Dec  2 20:12:22 np0005543037 podman[160233]: 2025-12-03 01:12:22.976133554 +0000 UTC m=+0.188082527 container start 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Dec  2 20:12:22 np0005543037 podman[160233]: openstack_network_exporter
Dec  2 20:12:22 np0005543037 systemd[1]: Started openstack_network_exporter container.
Dec  2 20:12:23 np0005543037 podman[160260]: 2025-12-03 01:12:23.086836632 +0000 UTC m=+0.094590170 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Dec  2 20:12:23 np0005543037 python3.9[160434]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:12:24 np0005543037 podman[160558]: 2025-12-03 01:12:24.686146883 +0000 UTC m=+0.083579197 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:12:24 np0005543037 python3.9[160610]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  2 20:12:25 np0005543037 python3.9[160775]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:26 np0005543037 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  2 20:12:26 np0005543037 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  2 20:12:26 np0005543037 podman[160776]: 2025-12-03 01:12:26.048740402 +0000 UTC m=+0.105303356 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  2 20:12:26 np0005543037 podman[160776]: 2025-12-03 01:12:26.056340113 +0000 UTC m=+0.112903067 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  2 20:12:26 np0005543037 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  2 20:12:26 np0005543037 python3.9[160961]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:27 np0005543037 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  2 20:12:27 np0005543037 podman[160962]: 2025-12-03 01:12:27.066895592 +0000 UTC m=+0.107967597 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 20:12:27 np0005543037 podman[160962]: 2025-12-03 01:12:27.102355258 +0000 UTC m=+0.143427213 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  2 20:12:27 np0005543037 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  2 20:12:27 np0005543037 python3.9[161145]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:28 np0005543037 python3.9[161297]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  2 20:12:29 np0005543037 python3.9[161463]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:29 np0005543037 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  2 20:12:29 np0005543037 podman[161464]: 2025-12-03 01:12:29.96218814 +0000 UTC m=+0.118514037 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  2 20:12:29 np0005543037 podman[161464]: 2025-12-03 01:12:29.997084538 +0000 UTC m=+0.153410355 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  2 20:12:30 np0005543037 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  2 20:12:30 np0005543037 python3.9[161649]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:31 np0005543037 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  2 20:12:31 np0005543037 podman[161650]: 2025-12-03 01:12:31.02157346 +0000 UTC m=+0.107770600 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  2 20:12:31 np0005543037 podman[161650]: 2025-12-03 01:12:31.052047515 +0000 UTC m=+0.138244595 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 20:12:31 np0005543037 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  2 20:12:31 np0005543037 python3.9[161832]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:32 np0005543037 python3.9[161984]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  2 20:12:33 np0005543037 python3.9[162149]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:33 np0005543037 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec  2 20:12:33 np0005543037 podman[162150]: 2025-12-03 01:12:33.979610092 +0000 UTC m=+0.099323164 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 20:12:34 np0005543037 podman[162150]: 2025-12-03 01:12:34.015156531 +0000 UTC m=+0.134869563 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:12:34 np0005543037 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec  2 20:12:34 np0005543037 python3.9[162331]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:35 np0005543037 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec  2 20:12:35 np0005543037 podman[162332]: 2025-12-03 01:12:35.069473077 +0000 UTC m=+0.107718829 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 20:12:35 np0005543037 podman[162332]: 2025-12-03 01:12:35.105283694 +0000 UTC m=+0.143529446 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 20:12:35 np0005543037 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec  2 20:12:36 np0005543037 python3.9[162515]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:36 np0005543037 python3.9[162667]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  2 20:12:37 np0005543037 podman[162832]: 2025-12-03 01:12:37.791859664 +0000 UTC m=+0.093035842 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 20:12:37 np0005543037 python3.9[162833]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:38 np0005543037 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec  2 20:12:38 np0005543037 podman[162854]: 2025-12-03 01:12:38.032068516 +0000 UTC m=+0.107931517 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:12:38 np0005543037 podman[162854]: 2025-12-03 01:12:38.063493814 +0000 UTC m=+0.139356815 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:12:38 np0005543037 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec  2 20:12:38 np0005543037 python3.9[163038]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:39 np0005543037 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec  2 20:12:39 np0005543037 podman[163039]: 2025-12-03 01:12:39.120985746 +0000 UTC m=+0.113978086 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:12:39 np0005543037 podman[163039]: 2025-12-03 01:12:39.153035222 +0000 UTC m=+0.146027492 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:12:39 np0005543037 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec  2 20:12:40 np0005543037 python3.9[163222]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:40 np0005543037 podman[163346]: 2025-12-03 01:12:40.825706876 +0000 UTC m=+0.160123696 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 20:12:40 np0005543037 python3.9[163391]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  2 20:12:40 np0005543037 podman[163399]: 2025-12-03 01:12:40.953002443 +0000 UTC m=+0.122210086 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  2 20:12:41 np0005543037 python3.9[163583]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:42 np0005543037 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec  2 20:12:42 np0005543037 podman[163584]: 2025-12-03 01:12:42.036684568 +0000 UTC m=+0.126852016 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec  2 20:12:42 np0005543037 podman[163584]: 2025-12-03 01:12:42.048174949 +0000 UTC m=+0.138342357 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  2 20:12:42 np0005543037 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec  2 20:12:43 np0005543037 python3.9[163767]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:12:43 np0005543037 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec  2 20:12:43 np0005543037 podman[163768]: 2025-12-03 01:12:43.135059222 +0000 UTC m=+0.098490054 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  2 20:12:43 np0005543037 podman[163768]: 2025-12-03 01:12:43.167435536 +0000 UTC m=+0.130866358 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, vendor=Red Hat, Inc.)
Dec  2 20:12:43 np0005543037 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec  2 20:12:44 np0005543037 python3.9[163948]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:45 np0005543037 python3.9[164100]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:45 np0005543037 python3.9[164252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:46 np0005543037 python3.9[164375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724365.2886798-1016-103933134709506/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:47 np0005543037 python3.9[164527]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:48 np0005543037 python3.9[164679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:49 np0005543037 python3.9[164757]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:49 np0005543037 python3.9[164909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:50 np0005543037 python3.9[164987]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.b8uiwgyx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:51 np0005543037 python3.9[165139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:51 np0005543037 python3.9[165217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:52 np0005543037 python3.9[165369]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:12:53 np0005543037 podman[165494]: 2025-12-03 01:12:53.591588425 +0000 UTC m=+0.105947582 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Dec  2 20:12:53 np0005543037 python3[165540]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 20:12:54 np0005543037 python3.9[165696]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:54 np0005543037 podman[165699]: 2025-12-03 01:12:54.817761232 +0000 UTC m=+0.077413715 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 20:12:55 np0005543037 python3.9[165798]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:56 np0005543037 python3.9[165950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:56 np0005543037 python3.9[166028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:57 np0005543037 python3.9[166180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:58 np0005543037 python3.9[166258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:12:59 np0005543037 python3.9[166410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:12:59 np0005543037 python3.9[166488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:00 np0005543037 python3.9[166640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:01 np0005543037 python3.9[166765]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724380.0770705-1141-81232477247573/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:02 np0005543037 python3.9[166917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:03 np0005543037 python3.9[167069]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:13:04 np0005543037 python3.9[167224]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:05 np0005543037 python3.9[167376]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:13:06 np0005543037 python3.9[167529]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:13:07 np0005543037 python3.9[167683]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:13:07 np0005543037 python3.9[167838]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:08 np0005543037 systemd[1]: session-21.scope: Deactivated successfully.
Dec  2 20:13:08 np0005543037 systemd[1]: session-21.scope: Consumed 2min 31.753s CPU time.
Dec  2 20:13:08 np0005543037 systemd-logind[800]: Session 21 logged out. Waiting for processes to exit.
Dec  2 20:13:08 np0005543037 systemd-logind[800]: Removed session 21.
Dec  2 20:13:08 np0005543037 podman[167863]: 2025-12-03 01:13:08.513677833 +0000 UTC m=+0.077014093 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:13:08 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:13:11 np0005543037 podman[167894]: 2025-12-03 01:13:11.80031836 +0000 UTC m=+0.061248262 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  2 20:13:11 np0005543037 podman[167895]: 2025-12-03 01:13:11.869100733 +0000 UTC m=+0.117434323 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  2 20:13:14 np0005543037 systemd-logind[800]: New session 22 of user zuul.
Dec  2 20:13:14 np0005543037 systemd[1]: Started Session 22 of User zuul.
Dec  2 20:13:15 np0005543037 python3.9[168094]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:13:15 np0005543037 systemd[1]: Reloading.
Dec  2 20:13:15 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:13:15 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:13:16 np0005543037 python3.9[168279]: ansible-ansible.builtin.service_facts Invoked
Dec  2 20:13:16 np0005543037 network[168296]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  2 20:13:16 np0005543037 network[168297]: 'network-scripts' will be removed from distribution in near future.
Dec  2 20:13:16 np0005543037 network[168298]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  2 20:13:22 np0005543037 python3.9[168570]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:13:23 np0005543037 python3.9[168723]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:23 np0005543037 podman[168814]: 2025-12-03 01:13:23.853898045 +0000 UTC m=+0.101094226 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public)
Dec  2 20:13:24 np0005543037 python3.9[168898]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:25 np0005543037 podman[169022]: 2025-12-03 01:13:25.192971516 +0000 UTC m=+0.079028389 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:13:25 np0005543037 python3.9[169064]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:13:26 np0005543037 python3.9[169223]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:13:27 np0005543037 python3.9[169375]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:13:27 np0005543037 systemd[1]: Reloading.
Dec  2 20:13:27 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:13:27 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:13:28 np0005543037 python3.9[169563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:13:29 np0005543037 python3.9[169716]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:29 np0005543037 podman[158098]: time="2025-12-03T01:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:13:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  2 20:13:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2143 "" "Go-http-client/1.1"
Dec  2 20:13:30 np0005543037 python3.9[169871]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:13:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:13:31 np0005543037 python3.9[170023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:32 np0005543037 python3.9[170144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724410.918309-125-116023157648922/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:33 np0005543037 python3.9[170296]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  2 20:13:35 np0005543037 python3.9[170447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:35 np0005543037 python3.9[170568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724414.644533-171-178298927822883/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:36 np0005543037 python3.9[170718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:37 np0005543037 python3.9[170839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724416.0936341-171-274543445144827/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:38 np0005543037 python3.9[170989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:38 np0005543037 podman[171084]: 2025-12-03 01:13:38.778303356 +0000 UTC m=+0.090148130 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  2 20:13:38 np0005543037 python3.9[171122]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724417.691299-171-280403020519388/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:39 np0005543037 python3.9[171283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:13:40 np0005543037 python3.9[171435]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.964 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.966 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.972 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e95b2150>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:13:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:13:41 np0005543037 python3.9[171588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:42 np0005543037 podman[171683]: 2025-12-03 01:13:42.024459579 +0000 UTC m=+0.092556897 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 20:13:42 np0005543037 podman[171684]: 2025-12-03 01:13:42.062079003 +0000 UTC m=+0.122744333 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 20:13:42 np0005543037 python3.9[171732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724420.9401445-230-224011119583391/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:42 np0005543037 python3.9[171902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:43 np0005543037 python3.9[171978]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:44 np0005543037 python3.9[172128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:45 np0005543037 python3.9[172249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724423.7351851-230-166665735895968/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:45 np0005543037 python3.9[172399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:46 np0005543037 python3.9[172520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724425.3299277-230-205940406386415/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:47 np0005543037 python3.9[172670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:48 np0005543037 python3.9[172791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724426.7745686-230-6273744919252/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:48 np0005543037 python3.9[172941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:49 np0005543037 python3.9[173062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724428.2021344-230-208592151178601/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:50 np0005543037 python3.9[173212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:51 np0005543037 python3.9[173288]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:52 np0005543037 python3.9[173440]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:52 np0005543037 python3.9[173592]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:13:53 np0005543037 python3.9[173744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:54 np0005543037 podman[173868]: 2025-12-03 01:13:54.52905398 +0000 UTC m=+0.128965058 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, managed_by=edpm_ansible)
Dec  2 20:13:54 np0005543037 python3.9[173911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:55 np0005543037 python3.9[174037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:55 np0005543037 podman[174038]: 2025-12-03 01:13:55.452318201 +0000 UTC m=+0.063208104 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:13:55 np0005543037 python3.9[174137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:56 np0005543037 python3.9[174260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724433.9436047-349-72010599689751/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:57 np0005543037 python3.9[174412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:13:58 np0005543037 python3.9[174535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764724436.8167927-349-237136512574990/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  2 20:13:59 np0005543037 python3.9[174687]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  2 20:13:59 np0005543037 podman[158098]: time="2025-12-03T01:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:13:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  2 20:13:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2153 "" "Go-http-client/1.1"
Dec  2 20:14:00 np0005543037 python3.9[174839]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:14:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:14:01 np0005543037 python3[174991]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:14:06 np0005543037 podman[175004]: 2025-12-03 01:14:06.824136886 +0000 UTC m=+4.990111436 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  2 20:14:07 np0005543037 podman[175103]: 2025-12-03 01:14:07.02788863 +0000 UTC m=+0.076489276 container create ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true)
Dec  2 20:14:07 np0005543037 podman[175103]: 2025-12-03 01:14:06.994130773 +0000 UTC m=+0.042731479 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  2 20:14:07 np0005543037 python3[174991]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  2 20:14:08 np0005543037 python3.9[175292]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:14:08 np0005543037 python3.9[175446]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:09 np0005543037 podman[175568]: 2025-12-03 01:14:09.831228953 +0000 UTC m=+0.076917508 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 20:14:10 np0005543037 python3.9[175621]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724449.2859688-427-119060671743697/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:10 np0005543037 python3.9[175697]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:14:10 np0005543037 systemd[1]: Reloading.
Dec  2 20:14:11 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:14:11 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:14:12 np0005543037 python3.9[175808]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:14:12 np0005543037 systemd[1]: Reloading.
Dec  2 20:14:12 np0005543037 podman[175810]: 2025-12-03 01:14:12.282971945 +0000 UTC m=+0.105493919 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  2 20:14:12 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:14:12 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:14:12 np0005543037 podman[175811]: 2025-12-03 01:14:12.363283748 +0000 UTC m=+0.181208483 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  2 20:14:12 np0005543037 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  2 20:14:12 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:14:12 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:12 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:12 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:12 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:12 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec  2 20:14:12 np0005543037 podman[175892]: 2025-12-03 01:14:12.745605489 +0000 UTC m=+0.177149659 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + sudo -E kolla_set_configs
Dec  2 20:14:12 np0005543037 podman[175892]: 2025-12-03 01:14:12.783307036 +0000 UTC m=+0.214851196 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 20:14:12 np0005543037 podman[175892]: ceilometer_agent_ipmi
Dec  2 20:14:12 np0005543037 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Validating config file
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying service configuration files
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: INFO:__main__:Writing out command to execute
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: ++ cat /run_command
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + ARGS=
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + sudo kolla_copy_cacerts
Dec  2 20:14:12 np0005543037 podman[175915]: 2025-12-03 01:14:12.906775889 +0000 UTC m=+0.106558829 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  2 20:14:12 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:14:12 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.service: Failed with result 'exit-code'.
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + [[ ! -n '' ]]
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + . kolla_extend_start
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + umask 0022
Dec  2 20:14:12 np0005543037 ceilometer_agent_ipmi[175908]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.697 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.698 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.699 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.705 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.711 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.732 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.734 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.736 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 20:14:13 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:13.842 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpav3hmwj4/privsep.sock']
Dec  2 20:14:13 np0005543037 python3.9[176088]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  2 20:14:14 np0005543037 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.493 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.494 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpav3hmwj4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.379 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.387 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.391 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.391 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.625 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.626 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.627 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.628 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.629 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.630 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.630 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.635 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.636 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.637 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.638 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.639 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.640 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.641 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.643 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.644 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.645 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.646 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.647 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.648 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.649 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.650 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.651 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.652 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.653 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.654 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.655 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.656 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.657 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.658 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.659 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.660 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.661 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.662 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.663 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.664 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.665 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.666 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.667 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.668 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.669 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.670 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.671 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.672 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.673 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.674 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.675 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.676 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.677 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.678 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.679 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.680 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.681 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.682 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.683 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.684 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.685 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  2 20:14:14 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:14.688 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  2 20:14:14 np0005543037 python3.9[176251]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  2 20:14:15 np0005543037 python3[176405]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  2 20:14:21 np0005543037 podman[176419]: 2025-12-03 01:14:21.818314263 +0000 UTC m=+5.915580710 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  2 20:14:22 np0005543037 podman[176617]: 2025-12-03 01:14:22.028729154 +0000 UTC m=+0.069981273 container create 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, config_id=edpm, build-date=2024-09-18T21:23:30, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  2 20:14:22 np0005543037 podman[176617]: 2025-12-03 01:14:21.988503786 +0000 UTC m=+0.029755965 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  2 20:14:22 np0005543037 python3[176405]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  2 20:14:23 np0005543037 python3.9[176807]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:14:24 np0005543037 python3.9[176961]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:24 np0005543037 podman[177064]: 2025-12-03 01:14:24.874840966 +0000 UTC m=+0.132349802 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec  2 20:14:25 np0005543037 python3.9[177133]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724464.2897983-489-93043377665584/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:25 np0005543037 python3.9[177209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  2 20:14:25 np0005543037 systemd[1]: Reloading.
Dec  2 20:14:25 np0005543037 podman[177210]: 2025-12-03 01:14:25.824433766 +0000 UTC m=+0.082838924 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:14:25 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:14:25 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:14:27 np0005543037 python3.9[177345]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  2 20:14:27 np0005543037 systemd[1]: Reloading.
Dec  2 20:14:27 np0005543037 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  2 20:14:27 np0005543037 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  2 20:14:28 np0005543037 systemd[1]: Starting kepler container...
Dec  2 20:14:28 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:14:28 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec  2 20:14:28 np0005543037 podman[177393]: 2025-12-03 01:14:28.727451844 +0000 UTC m=+0.659474044 container init 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 20:14:28 np0005543037 kepler[177408]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 20:14:28 np0005543037 podman[177393]: 2025-12-03 01:14:28.770571133 +0000 UTC m=+0.702593313 container start 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, config_id=edpm, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.774057       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.774272       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.774332       1 config.go:295] kernel version: 5.14
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.775209       1 power.go:78] Unable to obtain power, use estimate method
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.775252       1 redfish.go:169] failed to get redfish credential file path
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.775955       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.775975       1 power.go:79] using none to obtain power
Dec  2 20:14:28 np0005543037 kepler[177408]: E1203 01:14:28.776002       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  2 20:14:28 np0005543037 kepler[177408]: E1203 01:14:28.776040       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  2 20:14:28 np0005543037 podman[177393]: kepler
Dec  2 20:14:28 np0005543037 kepler[177408]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 20:14:28 np0005543037 kepler[177408]: I1203 01:14:28.779166       1 exporter.go:84] Number of CPUs: 8
Dec  2 20:14:28 np0005543037 systemd[1]: Started kepler container.
Dec  2 20:14:28 np0005543037 podman[177418]: 2025-12-03 01:14:28.884811087 +0000 UTC m=+0.095694985 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, config_id=edpm, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0)
Dec  2 20:14:28 np0005543037 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:14:28 np0005543037 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.service: Failed with result 'exit-code'.
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.437410       1 watcher.go:83] Using in cluster k8s config
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.437472       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  2 20:14:29 np0005543037 kepler[177408]: E1203 01:14:29.438090       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.445637       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.445718       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.453807       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.453872       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.467055       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.467115       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.467142       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479864       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479920       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479930       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479938       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479948       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.479964       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.480085       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.480128       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.480203       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.480241       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.480447       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  2 20:14:29 np0005543037 kepler[177408]: I1203 01:14:29.481180       1 exporter.go:208] Started Kepler in 707.50669ms
Dec  2 20:14:29 np0005543037 podman[158098]: time="2025-12-03T01:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:14:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18539 "" "Go-http-client/1.1"
Dec  2 20:14:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2985 "" "Go-http-client/1.1"
Dec  2 20:14:29 np0005543037 python3.9[177602]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:14:29 np0005543037 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  2 20:14:29 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:29.994 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  2 20:14:30 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.097 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  2 20:14:30 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.098 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  2 20:14:30 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.098 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  2 20:14:30 np0005543037 ceilometer_agent_ipmi[175908]: 2025-12-03 01:14:30.113 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  2 20:14:30 np0005543037 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  2 20:14:30 np0005543037 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Consumed 2.201s CPU time.
Dec  2 20:14:30 np0005543037 podman[177606]: 2025-12-03 01:14:30.273114348 +0000 UTC m=+0.363860095 container died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 20:14:30 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-445d4fc2e44b3551.timer: Deactivated successfully.
Dec  2 20:14:30 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec  2 20:14:30 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-userdata-shm.mount: Deactivated successfully.
Dec  2 20:14:30 np0005543037 systemd[1]: var-lib-containers-storage-overlay-bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889-merged.mount: Deactivated successfully.
Dec  2 20:14:30 np0005543037 podman[177606]: 2025-12-03 01:14:30.611876828 +0000 UTC m=+0.702622565 container cleanup ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 20:14:30 np0005543037 podman[177606]: ceilometer_agent_ipmi
Dec  2 20:14:30 np0005543037 podman[177632]: ceilometer_agent_ipmi
Dec  2 20:14:30 np0005543037 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  2 20:14:30 np0005543037 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  2 20:14:30 np0005543037 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  2 20:14:30 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:14:30 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:30 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:30 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:30 np0005543037 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  2 20:14:30 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec  2 20:14:30 np0005543037 podman[177645]: 2025-12-03 01:14:30.978119848 +0000 UTC m=+0.221705478 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + sudo -E kolla_set_configs
Dec  2 20:14:31 np0005543037 podman[177645]: 2025-12-03 01:14:31.01846145 +0000 UTC m=+0.262047070 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  2 20:14:31 np0005543037 podman[177645]: ceilometer_agent_ipmi
Dec  2 20:14:31 np0005543037 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Validating config file
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying service configuration files
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: INFO:__main__:Writing out command to execute
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: ++ cat /run_command
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + ARGS=
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + sudo kolla_copy_cacerts
Dec  2 20:14:31 np0005543037 podman[177666]: 2025-12-03 01:14:31.146945963 +0000 UTC m=+0.105954793 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  2 20:14:31 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:14:31 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + [[ ! -n '' ]]
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + . kolla_extend_start
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + umask 0022
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:14:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.972 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.973 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.974 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.975 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.976 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.977 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.978 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.979 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.980 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.981 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.982 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.983 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:31 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.984 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.985 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:31.986 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.009 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.012 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.014 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.045 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpgdvpqcy1/privsep.sock']
Dec  2 20:14:32 np0005543037 python3.9[177842]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  2 20:14:32 np0005543037 systemd[1]: Stopping kepler container...
Dec  2 20:14:32 np0005543037 kepler[177408]: I1203 01:14:32.490864       1 exporter.go:218] Received shutdown signal
Dec  2 20:14:32 np0005543037 kepler[177408]: I1203 01:14:32.491674       1 exporter.go:226] Exiting...
Dec  2 20:14:32 np0005543037 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec  2 20:14:32 np0005543037 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Consumed 1.010s CPU time.
Dec  2 20:14:32 np0005543037 podman[177854]: 2025-12-03 01:14:32.69627211 +0000 UTC m=+0.292990347 container died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 20:14:32 np0005543037 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-416ed2eac0816b80.timer: Deactivated successfully.
Dec  2 20:14:32 np0005543037 systemd[1]: Stopped /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec  2 20:14:32 np0005543037 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-userdata-shm.mount: Deactivated successfully.
Dec  2 20:14:32 np0005543037 systemd[1]: var-lib-containers-storage-overlay-56bb532fcb66b2740ea57176a30adf601274f50f260afcb2d3f32777dc3ac537-merged.mount: Deactivated successfully.
Dec  2 20:14:32 np0005543037 podman[177854]: 2025-12-03 01:14:32.75117191 +0000 UTC m=+0.347890147 container cleanup 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 20:14:32 np0005543037 podman[177854]: kepler
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.755 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.756 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgdvpqcy1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.625 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.633 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.638 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.638 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  2 20:14:32 np0005543037 podman[177885]: kepler
Dec  2 20:14:32 np0005543037 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  2 20:14:32 np0005543037 systemd[1]: Stopped kepler container.
Dec  2 20:14:32 np0005543037 systemd[1]: Starting kepler container...
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.855 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.856 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.858 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.858 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.859 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.860 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.861 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.861 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.868 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.869 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.870 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.871 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.872 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.873 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.874 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.875 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.876 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.877 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.878 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.879 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.880 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.881 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.882 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.883 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.884 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.885 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.886 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.887 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.888 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.889 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.890 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.891 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.892 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.893 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.894 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.895 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.896 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.897 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.898 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.899 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.900 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.901 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.902 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.903 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.904 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.905 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.906 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.907 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.908 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.912 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.913 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  2 20:14:32 np0005543037 ceilometer_agent_ipmi[177659]: 2025-12-03 01:14:32.918 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  2 20:14:32 np0005543037 systemd[1]: Started libcrun container.
Dec  2 20:14:33 np0005543037 systemd[1]: Started /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec  2 20:14:33 np0005543037 podman[177898]: 2025-12-03 01:14:33.023427995 +0000 UTC m=+0.161130350 container init 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec  2 20:14:33 np0005543037 kepler[177915]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 20:14:33 np0005543037 podman[177898]: 2025-12-03 01:14:33.053710334 +0000 UTC m=+0.191412679 container start 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  2 20:14:33 np0005543037 podman[177898]: kepler
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.062674       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.062881       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.062906       1 config.go:295] kernel version: 5.14
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.063625       1 power.go:78] Unable to obtain power, use estimate method
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.063669       1 redfish.go:169] failed to get redfish credential file path
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.064412       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.064451       1 power.go:79] using none to obtain power
Dec  2 20:14:33 np0005543037 kepler[177915]: E1203 01:14:33.064477       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  2 20:14:33 np0005543037 kepler[177915]: E1203 01:14:33.064508       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  2 20:14:33 np0005543037 systemd[1]: Started kepler container.
Dec  2 20:14:33 np0005543037 kepler[177915]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.067864       1 exporter.go:84] Number of CPUs: 8
Dec  2 20:14:33 np0005543037 podman[177925]: 2025-12-03 01:14:33.149191671 +0000 UTC m=+0.083252585 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Dec  2 20:14:33 np0005543037 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:14:33 np0005543037 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed with result 'exit-code'.
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.652809       1 watcher.go:83] Using in cluster k8s config
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.652843       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  2 20:14:33 np0005543037 kepler[177915]: E1203 01:14:33.652885       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.660882       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.660944       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.670120       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.670181       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.684613       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.684668       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.684690       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698297       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698351       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698360       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698369       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698379       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698395       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698590       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698648       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698690       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698724       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.698943       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  2 20:14:33 np0005543037 kepler[177915]: I1203 01:14:33.699643       1 exporter.go:208] Started Kepler in 637.374592ms
Dec  2 20:14:33 np0005543037 python3.9[178107]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  2 20:14:35 np0005543037 python3.9[178261]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  2 20:14:36 np0005543037 python3.9[178427]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:37 np0005543037 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  2 20:14:37 np0005543037 podman[178428]: 2025-12-03 01:14:37.185972792 +0000 UTC m=+0.164867375 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  2 20:14:37 np0005543037 podman[178428]: 2025-12-03 01:14:37.22372537 +0000 UTC m=+0.202619893 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  2 20:14:37 np0005543037 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  2 20:14:38 np0005543037 python3.9[178609]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:38 np0005543037 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  2 20:14:38 np0005543037 podman[178610]: 2025-12-03 01:14:38.64904596 +0000 UTC m=+0.151104468 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 20:14:38 np0005543037 podman[178610]: 2025-12-03 01:14:38.683003243 +0000 UTC m=+0.185061701 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  2 20:14:38 np0005543037 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  2 20:14:39 np0005543037 python3.9[178790]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:40 np0005543037 podman[178914]: 2025-12-03 01:14:40.871463063 +0000 UTC m=+0.126386836 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:14:41 np0005543037 python3.9[178966]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  2 20:14:42 np0005543037 python3.9[179130]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:42 np0005543037 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  2 20:14:42 np0005543037 podman[179131]: 2025-12-03 01:14:42.46035136 +0000 UTC m=+0.130921633 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 20:14:42 np0005543037 podman[179131]: 2025-12-03 01:14:42.495969539 +0000 UTC m=+0.166539752 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 20:14:42 np0005543037 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  2 20:14:42 np0005543037 podman[179163]: 2025-12-03 01:14:42.70640932 +0000 UTC m=+0.110938452 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  2 20:14:42 np0005543037 podman[179164]: 2025-12-03 01:14:42.781514766 +0000 UTC m=+0.182307113 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  2 20:14:43 np0005543037 python3.9[179358]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:43 np0005543037 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  2 20:14:43 np0005543037 podman[179359]: 2025-12-03 01:14:43.848815466 +0000 UTC m=+0.159766871 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  2 20:14:43 np0005543037 podman[179359]: 2025-12-03 01:14:43.883867689 +0000 UTC m=+0.194819104 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  2 20:14:43 np0005543037 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  2 20:14:45 np0005543037 python3.9[179540]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:46 np0005543037 python3.9[179694]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  2 20:14:47 np0005543037 python3.9[179859]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:47 np0005543037 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec  2 20:14:47 np0005543037 podman[179860]: 2025-12-03 01:14:47.82066255 +0000 UTC m=+0.157683583 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:14:47 np0005543037 podman[179860]: 2025-12-03 01:14:47.852833527 +0000 UTC m=+0.189854590 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 20:14:47 np0005543037 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec  2 20:14:49 np0005543037 python3.9[180041]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:49 np0005543037 systemd[1]: Started libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope.
Dec  2 20:14:49 np0005543037 podman[180042]: 2025-12-03 01:14:49.194227049 +0000 UTC m=+0.141617405 container exec 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  2 20:14:49 np0005543037 podman[180042]: 2025-12-03 01:14:49.226690794 +0000 UTC m=+0.174081180 container exec_died 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  2 20:14:49 np0005543037 systemd[1]: libpod-conmon-0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb.scope: Deactivated successfully.
Dec  2 20:14:50 np0005543037 python3.9[180225]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:51 np0005543037 python3.9[180377]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  2 20:14:52 np0005543037 python3.9[180542]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:53 np0005543037 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec  2 20:14:53 np0005543037 podman[180543]: 2025-12-03 01:14:53.123460251 +0000 UTC m=+0.149985789 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:14:53 np0005543037 podman[180543]: 2025-12-03 01:14:53.160886351 +0000 UTC m=+0.187411829 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:14:53 np0005543037 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec  2 20:14:54 np0005543037 python3.9[180723]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:54 np0005543037 systemd[1]: Started libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope.
Dec  2 20:14:54 np0005543037 podman[180724]: 2025-12-03 01:14:54.457586801 +0000 UTC m=+0.128824893 container exec 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  2 20:14:54 np0005543037 podman[180724]: 2025-12-03 01:14:54.491524129 +0000 UTC m=+0.162762201 container exec_died 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  2 20:14:54 np0005543037 systemd[1]: libpod-conmon-7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a.scope: Deactivated successfully.
Dec  2 20:14:55 np0005543037 podman[180877]: 2025-12-03 01:14:55.406277056 +0000 UTC m=+0.142150980 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec  2 20:14:55 np0005543037 python3.9[180922]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:14:56 np0005543037 podman[181051]: 2025-12-03 01:14:56.516050114 +0000 UTC m=+0.110956372 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  2 20:14:56 np0005543037 python3.9[181100]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  2 20:14:58 np0005543037 python3.9[181265]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:58 np0005543037 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec  2 20:14:58 np0005543037 podman[181266]: 2025-12-03 01:14:58.191458573 +0000 UTC m=+0.164084739 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, version=9.6)
Dec  2 20:14:58 np0005543037 podman[181266]: 2025-12-03 01:14:58.226229605 +0000 UTC m=+0.198855721 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec  2 20:14:58 np0005543037 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec  2 20:14:59 np0005543037 python3.9[181449]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:14:59 np0005543037 systemd[1]: Started libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope.
Dec  2 20:14:59 np0005543037 podman[181450]: 2025-12-03 01:14:59.687808126 +0000 UTC m=+0.146693972 container exec 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public)
Dec  2 20:14:59 np0005543037 podman[181450]: 2025-12-03 01:14:59.723877217 +0000 UTC m=+0.182763043 container exec_died 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  2 20:14:59 np0005543037 podman[158098]: time="2025-12-03T01:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:14:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Dec  2 20:14:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
Dec  2 20:14:59 np0005543037 systemd[1]: libpod-conmon-3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44.scope: Deactivated successfully.
Dec  2 20:15:00 np0005543037 python3.9[181632]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:15:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:15:01 np0005543037 podman[181756]: 2025-12-03 01:15:01.771204406 +0000 UTC m=+0.108190311 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  2 20:15:01 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec  2 20:15:01 np0005543037 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
Dec  2 20:15:01 np0005543037 python3.9[181805]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  2 20:15:03 np0005543037 python3.9[181970]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:15:03 np0005543037 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec  2 20:15:03 np0005543037 podman[181971]: 2025-12-03 01:15:03.576052363 +0000 UTC m=+0.155822578 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  2 20:15:03 np0005543037 podman[181971]: 2025-12-03 01:15:03.61163717 +0000 UTC m=+0.191407415 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  2 20:15:03 np0005543037 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  2 20:15:03 np0005543037 podman[181986]: 2025-12-03 01:15:03.753230213 +0000 UTC m=+0.168337223 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, architecture=x86_64, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  2 20:15:04 np0005543037 python3.9[182168]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:15:04 np0005543037 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec  2 20:15:04 np0005543037 podman[182169]: 2025-12-03 01:15:04.895026803 +0000 UTC m=+0.149271658 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  2 20:15:04 np0005543037 podman[182169]: 2025-12-03 01:15:04.928977221 +0000 UTC m=+0.183222086 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  2 20:15:04 np0005543037 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  2 20:15:05 np0005543037 python3.9[182350]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:07 np0005543037 python3.9[182502]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  2 20:15:08 np0005543037 python3.9[182666]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:15:08 np0005543037 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec  2 20:15:08 np0005543037 podman[182667]: 2025-12-03 01:15:08.564491928 +0000 UTC m=+0.137356220 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  2 20:15:08 np0005543037 podman[182667]: 2025-12-03 01:15:08.59922331 +0000 UTC m=+0.172087592 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9)
Dec  2 20:15:08 np0005543037 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec  2 20:15:09 np0005543037 python3.9[182849]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  2 20:15:09 np0005543037 systemd[1]: Started libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope.
Dec  2 20:15:09 np0005543037 podman[182850]: 2025-12-03 01:15:09.976005571 +0000 UTC m=+0.133629322 container exec 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543)
Dec  2 20:15:10 np0005543037 podman[182850]: 2025-12-03 01:15:10.008784676 +0000 UTC m=+0.166408427 container exec_died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc.)
Dec  2 20:15:10 np0005543037 systemd[1]: libpod-conmon-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec  2 20:15:11 np0005543037 python3.9[183029]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:11 np0005543037 podman[183141]: 2025-12-03 01:15:11.876171155 +0000 UTC m=+0.125281949 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 20:15:12 np0005543037 python3.9[183204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:13 np0005543037 podman[183328]: 2025-12-03 01:15:13.109201161 +0000 UTC m=+0.163531623 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  2 20:15:13 np0005543037 podman[183329]: 2025-12-03 01:15:13.135167687 +0000 UTC m=+0.186976226 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  2 20:15:13 np0005543037 python3.9[183393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:14 np0005543037 python3.9[183521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764724512.4711742-778-96190474094134/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:15 np0005543037 python3.9[183673]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:16 np0005543037 python3.9[183825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:17 np0005543037 python3.9[183903]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:18 np0005543037 python3.9[184055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:19 np0005543037 python3.9[184133]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.umbw37x2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:20 np0005543037 python3.9[184286]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:21 np0005543037 python3.9[184364]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:22 np0005543037 python3.9[184516]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:15:23 np0005543037 python3[184669]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  2 20:15:24 np0005543037 python3.9[184821]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:25 np0005543037 python3.9[184899]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:25 np0005543037 podman[184900]: 2025-12-03 01:15:25.916267727 +0000 UTC m=+0.159288670 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  2 20:15:27 np0005543037 podman[185070]: 2025-12-03 01:15:27.254012773 +0000 UTC m=+0.108073848 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  2 20:15:27 np0005543037 python3.9[185071]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:28 np0005543037 python3.9[185171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:29 np0005543037 python3.9[185323]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:29 np0005543037 podman[158098]: time="2025-12-03T01:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:15:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Dec  2 20:15:29 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Dec  2 20:15:30 np0005543037 python3.9[185401]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:15:31 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:15:31 np0005543037 python3.9[185553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:32 np0005543037 podman[185603]: 2025-12-03 01:15:32.243381815 +0000 UTC m=+0.144293363 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  2 20:15:32 np0005543037 python3.9[185650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:33 np0005543037 python3.9[185802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:15:34 np0005543037 podman[185899]: 2025-12-03 01:15:34.48219488 +0000 UTC m=+0.131122139 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, com.redhat.component=ubi9-container, version=9.4, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Dec  2 20:15:34 np0005543037 python3.9[185947]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764724532.7748115-903-161198752836126/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:35 np0005543037 python3.9[186100]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:36 np0005543037 python3.9[186252]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:15:38 np0005543037 python3.9[186407]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:39 np0005543037 python3.9[186559]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:15:40 np0005543037 python3.9[186712]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.965 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.966 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.966 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00e9581820>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.986 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.989 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:40 np0005543037 ceilometer_agent_compute[154605]: 2025-12-03 01:15:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  2 20:15:41 np0005543037 python3.9[186867]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  2 20:15:42 np0005543037 podman[186994]: 2025-12-03 01:15:42.496651203 +0000 UTC m=+0.127338009 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  2 20:15:42 np0005543037 python3.9[187044]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:15:43 np0005543037 systemd[1]: session-22.scope: Deactivated successfully.
Dec  2 20:15:43 np0005543037 systemd[1]: session-22.scope: Consumed 2min 12.793s CPU time.
Dec  2 20:15:43 np0005543037 systemd-logind[800]: Session 22 logged out. Waiting for processes to exit.
Dec  2 20:15:43 np0005543037 systemd-logind[800]: Removed session 22.
Dec  2 20:15:43 np0005543037 podman[187069]: 2025-12-03 01:15:43.393295784 +0000 UTC m=+0.129994186 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  2 20:15:43 np0005543037 podman[187070]: 2025-12-03 01:15:43.47247279 +0000 UTC m=+0.207305678 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  2 20:15:49 np0005543037 systemd-logind[800]: New session 23 of user zuul.
Dec  2 20:15:49 np0005543037 systemd[1]: Started Session 23 of User zuul.
Dec  2 20:15:50 np0005543037 python3.9[187270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  2 20:15:52 np0005543037 python3.9[187426]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  2 20:15:53 np0005543037 python3.9[187579]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  2 20:15:54 np0005543037 python3.9[187663]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  2 20:15:56 np0005543037 podman[187665]: 2025-12-03 01:15:56.88025414 +0000 UTC m=+0.135143021 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec  2 20:15:57 np0005543037 podman[187687]: 2025-12-03 01:15:57.889261497 +0000 UTC m=+0.146741506 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  2 20:15:59 np0005543037 podman[158098]: time="2025-12-03T01:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  2 20:15:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  2 20:15:59 np0005543037 podman[158098]: @ - - [03/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  2 20:16:01 np0005543037 openstack_network_exporter[160250]: 
Dec  2 20:16:02 np0005543037 python3.9[187868]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:16:02 np0005543037 podman[187913]: 2025-12-03 01:16:02.895266868 +0000 UTC m=+0.140119968 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  2 20:16:03 np0005543037 python3.9[188012]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724561.5597394-54-210203165288696/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:16:04 np0005543037 podman[188164]: 2025-12-03 01:16:04.822845121 +0000 UTC m=+0.147826614 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  2 20:16:04 np0005543037 python3.9[188165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  2 20:16:05 np0005543037 python3.9[188333]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  2 20:16:06 np0005543037 python3.9[188456]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764724565.2082174-77-254176098394733/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:08 compute-0 python3.9[188608]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:16:08 compute-0 systemd[1]: Stopping System Logging Service...
Dec  3 01:16:08 compute-0 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  3 01:16:08 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  3 01:16:08 compute-0 systemd[1]: Stopped System Logging Service.
Dec  3 01:16:08 compute-0 systemd[1]: rsyslog.service: Consumed 2.337s CPU time, 5.4M memory peak, read 0B from disk, written 4.0M to disk.
Dec  3 01:16:08 compute-0 systemd[1]: Starting System Logging Service...
Dec  3 01:16:08 compute-0 rsyslogd[188612]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188612" x-info="https://www.rsyslog.com"] start
Dec  3 01:16:08 compute-0 systemd[1]: Started System Logging Service.
Dec  3 01:16:08 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:16:08 compute-0 rsyslogd[188612]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  3 01:16:08 compute-0 rsyslogd[188612]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  3 01:16:08 compute-0 rsyslogd[188612]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  3 01:16:08 compute-0 rsyslogd[188612]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec  3 01:16:09 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec  3 01:16:09 compute-0 systemd[1]: session-23.scope: Consumed 16.635s CPU time.
Dec  3 01:16:09 compute-0 systemd-logind[800]: Session 23 logged out. Waiting for processes to exit.
Dec  3 01:16:09 compute-0 systemd-logind[800]: Removed session 23.
Dec  3 01:16:12 compute-0 podman[188641]: 2025-12-03 01:16:12.881224498 +0000 UTC m=+0.129618070 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:16:13 compute-0 podman[188664]: 2025-12-03 01:16:13.859018854 +0000 UTC m=+0.109244613 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:16:13 compute-0 podman[188665]: 2025-12-03 01:16:13.981894879 +0000 UTC m=+0.226730433 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  3 01:16:17 compute-0 systemd-logind[800]: New session 24 of user zuul.
Dec  3 01:16:17 compute-0 systemd[1]: Started Session 24 of User zuul.
Dec  3 01:16:25 compute-0 python3[189451]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:16:27 compute-0 podman[189530]: 2025-12-03 01:16:27.890292386 +0000 UTC m=+0.137123507 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:16:28 compute-0 podman[189556]: 2025-12-03 01:16:28.080789358 +0000 UTC m=+0.127661135 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:16:28 compute-0 python3[189586]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 01:16:29 compute-0 python3[189624]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:16:29 compute-0 podman[158098]: time="2025-12-03T01:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 01:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
Dec  3 01:16:30 compute-0 python3[189650]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:30 compute-0 kernel: loop: module loaded
Dec  3 01:16:30 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec  3 01:16:31 compute-0 python3[189685]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:31 compute-0 lvm[189688]: PV /dev/loop3 not used.
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:16:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:16:31 compute-0 lvm[189697]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 01:16:31 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  3 01:16:31 compute-0 lvm[189699]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  3 01:16:31 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  3 01:16:32 compute-0 python3[189777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:16:32 compute-0 python3[189850]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724591.761974-36743-119521888773527/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:33 compute-0 podman[189900]: 2025-12-03 01:16:33.689933448 +0000 UTC m=+0.142512992 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 01:16:33 compute-0 python3[189901]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:16:33 compute-0 systemd[1]: Reloading.
Dec  3 01:16:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:16:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:16:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 01:16:34 compute-0 bash[189959]: /dev/loop3: [64513]:4329306 (/var/lib/ceph-osd-0.img)
Dec  3 01:16:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 01:16:34 compute-0 lvm[189961]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 01:16:34 compute-0 lvm[189961]: VG ceph_vg0 finished
Dec  3 01:16:34 compute-0 python3[189987]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 01:16:35 compute-0 podman[189989]: 2025-12-03 01:16:35.88343676 +0000 UTC m=+0.127667665 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 01:16:36 compute-0 python3[190034]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:16:37 compute-0 python3[190060]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:37 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec  3 01:16:37 compute-0 python3[190091]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:37 compute-0 lvm[190096]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 01:16:37 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  3 01:16:37 compute-0 lvm[190103]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  3 01:16:37 compute-0 lvm[190108]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 01:16:37 compute-0 lvm[190108]: VG ceph_vg1 finished
Dec  3 01:16:37 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  3 01:16:38 compute-0 python3[190186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:16:39 compute-0 python3[190259]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724598.1719015-36770-116689913042173/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:39 compute-0 python3[190309]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:16:40 compute-0 systemd[1]: Reloading.
Dec  3 01:16:40 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:16:40 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:16:40 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 01:16:40 compute-0 bash[190348]: /dev/loop4: [64513]:4330089 (/var/lib/ceph-osd-1.img)
Dec  3 01:16:40 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 01:16:40 compute-0 lvm[190349]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 01:16:40 compute-0 lvm[190349]: VG ceph_vg1 finished
Dec  3 01:16:41 compute-0 python3[190375]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 01:16:43 compute-0 python3[190403]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:16:43 compute-0 podman[190402]: 2025-12-03 01:16:43.14649894 +0000 UTC m=+0.139452360 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:16:43 compute-0 python3[190451]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:43 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec  3 01:16:43 compute-0 python3[190483]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:44 compute-0 lvm[190486]: PV /dev/loop5 not used.
Dec  3 01:16:44 compute-0 lvm[190499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 01:16:44 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  3 01:16:44 compute-0 lvm[190532]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 01:16:44 compute-0 lvm[190532]: VG ceph_vg2 finished
Dec  3 01:16:44 compute-0 lvm[190524]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  3 01:16:44 compute-0 podman[190488]: 2025-12-03 01:16:44.337266451 +0000 UTC m=+0.157820453 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 01:16:44 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec  3 01:16:44 compute-0 podman[190489]: 2025-12-03 01:16:44.370067725 +0000 UTC m=+0.188611809 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:16:45 compute-0 python3[190619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:16:45 compute-0 python3[190692]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724604.602049-36797-98417805017753/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:46 compute-0 python3[190742]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:16:46 compute-0 systemd[1]: Reloading.
Dec  3 01:16:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:16:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:16:46 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  3 01:16:46 compute-0 bash[190781]: /dev/loop5: [64513]:4362427 (/var/lib/ceph-osd-2.img)
Dec  3 01:16:46 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  3 01:16:47 compute-0 lvm[190783]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 01:16:47 compute-0 lvm[190783]: VG ceph_vg2 finished
Dec  3 01:16:49 compute-0 python3[190807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:16:52 compute-0 python3[190909]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 01:16:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 01:16:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  3 01:16:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 01:16:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  3 01:16:54 compute-0 systemd[1]: run-r068cba871ac04d0fbb54351cf90560af.service: Deactivated successfully.
Dec  3 01:16:54 compute-0 python3[191036]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:16:55 compute-0 python3[191065]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:16:56 compute-0 python3[191129]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:56 compute-0 python3[191155]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:57 compute-0 python3[191233]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:16:58 compute-0 podman[191306]: 2025-12-03 01:16:58.455222166 +0000 UTC m=+0.122975927 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:16:58 compute-0 podman[191307]: 2025-12-03 01:16:58.470113029 +0000 UTC m=+0.134987840 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 01:16:58 compute-0 python3[191308]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724617.4048393-36944-140993472414015/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:16:59 compute-0 python3[191453]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:16:59 compute-0 podman[158098]: time="2025-12-03T01:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 01:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2988 "" "Go-http-client/1.1"
Dec  3 01:17:00 compute-0 python3[191526]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724619.1588588-36962-67661698673033/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:17:00 compute-0 python3[191576]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:17:01 compute-0 python3[191604]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:17:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:17:01 compute-0 python3[191632]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:17:02 compute-0 python3[191660]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:17:02 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 01:17:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 01:17:02 compute-0 systemd-logind[800]: New session 25 of user ceph-admin.
Dec  3 01:17:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 01:17:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  3 01:17:02 compute-0 systemd[191679]: Queued start job for default target Main User Target.
Dec  3 01:17:02 compute-0 systemd[191679]: Created slice User Application Slice.
Dec  3 01:17:02 compute-0 systemd[191679]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 01:17:02 compute-0 systemd[191679]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 01:17:02 compute-0 systemd[191679]: Reached target Paths.
Dec  3 01:17:02 compute-0 systemd[191679]: Reached target Timers.
Dec  3 01:17:02 compute-0 systemd[191679]: Starting D-Bus User Message Bus Socket...
Dec  3 01:17:02 compute-0 systemd[191679]: Starting Create User's Volatile Files and Directories...
Dec  3 01:17:02 compute-0 systemd[191679]: Listening on D-Bus User Message Bus Socket.
Dec  3 01:17:02 compute-0 systemd[191679]: Finished Create User's Volatile Files and Directories.
Dec  3 01:17:02 compute-0 systemd[191679]: Reached target Sockets.
Dec  3 01:17:02 compute-0 systemd[191679]: Reached target Basic System.
Dec  3 01:17:02 compute-0 systemd[191679]: Reached target Main User Target.
Dec  3 01:17:02 compute-0 systemd[191679]: Startup finished in 204ms.
Dec  3 01:17:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  3 01:17:02 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec  3 01:17:03 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec  3 01:17:03 compute-0 systemd-logind[800]: Session 25 logged out. Waiting for processes to exit.
Dec  3 01:17:03 compute-0 systemd-logind[800]: Removed session 25.
Dec  3 01:17:03 compute-0 podman[191746]: 2025-12-03 01:17:03.898471445 +0000 UTC m=+0.146083961 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 01:17:08 compute-0 podman[191788]: 2025-12-03 01:17:08.714457764 +0000 UTC m=+2.717164076 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm)
Dec  3 01:17:13 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec  3 01:17:13 compute-0 systemd[191679]: Activating special unit Exit the Session...
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped target Main User Target.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped target Basic System.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped target Paths.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped target Sockets.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped target Timers.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  3 01:17:13 compute-0 systemd[191679]: Closed D-Bus User Message Bus Socket.
Dec  3 01:17:13 compute-0 systemd[191679]: Stopped Create User's Volatile Files and Directories.
Dec  3 01:17:13 compute-0 systemd[191679]: Removed slice User Application Slice.
Dec  3 01:17:13 compute-0 systemd[191679]: Reached target Shutdown.
Dec  3 01:17:13 compute-0 systemd[191679]: Finished Exit the Session.
Dec  3 01:17:13 compute-0 systemd[191679]: Reached target Exit the Session.
Dec  3 01:17:13 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec  3 01:17:13 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec  3 01:17:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  3 01:17:13 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  3 01:17:13 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  3 01:17:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  3 01:17:13 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec  3 01:17:13 compute-0 podman[191811]: 2025-12-03 01:17:13.577069913 +0000 UTC m=+0.089465717 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:17:29 compute-0 podman[158098]: time="2025-12-03T01:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  3 01:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Dec  3 01:17:29 compute-0 podman[191849]: 2025-12-03 01:17:29.968820356 +0000 UTC m=+15.287711772 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  3 01:17:29 compute-0 podman[191872]: 2025-12-03 01:17:29.97673482 +0000 UTC m=+1.223458401 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:17:29 compute-0 podman[191873]: 2025-12-03 01:17:29.99435353 +0000 UTC m=+1.235308010 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Dec  3 01:17:30 compute-0 podman[191850]: 2025-12-03 01:17:30.007488161 +0000 UTC m=+15.323153639 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:17:30 compute-0 podman[191732]: 2025-12-03 01:17:30.028103875 +0000 UTC m=+26.708038617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.142773798 +0000 UTC m=+0.081638897 container create 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.101784726 +0000 UTC m=+0.040649875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:30 compute-0 systemd[1]: Started libpod-conmon-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope.
Dec  3 01:17:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.294053765 +0000 UTC m=+0.232918904 container init 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.312375063 +0000 UTC m=+0.251240152 container start 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.318510313 +0000 UTC m=+0.257375472 container attach 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:17:30 compute-0 hungry_jemison[191949]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  3 01:17:30 compute-0 systemd[1]: libpod-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope: Deactivated successfully.
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.630956879 +0000 UTC m=+0.569821968 container died 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6294dbc3c56cdd30cce0c9a478501fef6a6b15a576bc9eb1e5210954992a091f-merged.mount: Deactivated successfully.
Dec  3 01:17:30 compute-0 podman[191935]: 2025-12-03 01:17:30.70712142 +0000 UTC m=+0.645986519 container remove 3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4 (image=quay.io/ceph/ceph:v18, name=hungry_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:17:30 compute-0 systemd[1]: libpod-conmon-3bbd7b83492151813d2e5f2f379aa08d28b2014b03c337b9a143cfa11352cab4.scope: Deactivated successfully.
Dec  3 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.840392787 +0000 UTC m=+0.088892163 container create e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.809354999 +0000 UTC m=+0.057854405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:30 compute-0 systemd[1]: Started libpod-conmon-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope.
Dec  3 01:17:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.970858556 +0000 UTC m=+0.219357912 container init e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.987160264 +0000 UTC m=+0.235659630 container start e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:17:30 compute-0 podman[191967]: 2025-12-03 01:17:30.995798455 +0000 UTC m=+0.244297871 container attach e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:17:30 compute-0 kind_blackwell[191983]: 167 167
Dec  3 01:17:30 compute-0 systemd[1]: libpod-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[191967]: 2025-12-03 01:17:30.999928526 +0000 UTC m=+0.248427902 container died e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7386033a1848d5401956961f907a5533b8065b9f61d6cc5512571f487be8ab-merged.mount: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[191967]: 2025-12-03 01:17:31.08026886 +0000 UTC m=+0.328768236 container remove e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c (image=quay.io/ceph/ceph:v18, name=kind_blackwell, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:17:31 compute-0 systemd[1]: libpod-conmon-e2dfc5115957b1ba372b309b46e56db59725f6e2413fc506ab2971bc9adbb94c.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.216214862 +0000 UTC m=+0.087610322 container create 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:31 compute-0 systemd[1]: Started libpod-conmon-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope.
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.17767131 +0000 UTC m=+0.049066820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.335479557 +0000 UTC m=+0.206875007 container init 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.348774112 +0000 UTC m=+0.220169552 container start 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.353456646 +0000 UTC m=+0.224852086 container attach 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:31 compute-0 hopeful_swanson[192014]: AQCrjy9p4Tj3FhAA04NDm/ejqJUoebszWMI02w==
Dec  3 01:17:31 compute-0 systemd[1]: libpod-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.392351367 +0000 UTC m=+0.263746807 container died 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:17:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-297534cd7e92263766f9f007292d37b76121512c61540fbefad2e0708dd6fc2c-merged.mount: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[191998]: 2025-12-03 01:17:31.456595377 +0000 UTC m=+0.327990827 container remove 95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf (image=quay.io/ceph/ceph:v18, name=hopeful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:17:31 compute-0 systemd[1]: libpod-conmon-95b4ee40e1798ed77134814a83443e6d62533eec572efe0c0d35990926f99edf.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.55820091 +0000 UTC m=+0.064990909 container create 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.527861969 +0000 UTC m=+0.034652048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:31 compute-0 systemd[1]: Started libpod-conmon-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope.
Dec  3 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.690300619 +0000 UTC m=+0.197090718 container init 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.704851744 +0000 UTC m=+0.211641763 container start 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.712319087 +0000 UTC m=+0.219109086 container attach 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:17:31 compute-0 elastic_lehmann[192048]: AQCrjy9pDu9ELBAASAza0/QFMisisbtSvlcmew==
Dec  3 01:17:31 compute-0 systemd[1]: libpod-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 conmon[192048]: conmon 64618abe2e218e7e158a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope/container/memory.events
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.751460603 +0000 UTC m=+0.258250602 container died 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6488d519b588c2d2d5c872d821bc9d43c841eb421b6ae0066171a97c32272b50-merged.mount: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[192032]: 2025-12-03 01:17:31.829343577 +0000 UTC m=+0.336133576 container remove 64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8 (image=quay.io/ceph/ceph:v18, name=elastic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:31 compute-0 systemd[1]: libpod-conmon-64618abe2e218e7e158a11eb8b8aa137d96d95b3a8b831f0ccc0f9927bcb1cb8.scope: Deactivated successfully.
Dec  3 01:17:31 compute-0 podman[192066]: 2025-12-03 01:17:31.947778301 +0000 UTC m=+0.081608075 container create 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:31.911806932 +0000 UTC m=+0.045636756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:32 compute-0 systemd[1]: Started libpod-conmon-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope.
Dec  3 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.081037958 +0000 UTC m=+0.214867762 container init 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.095364148 +0000 UTC m=+0.229193912 container start 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.103327033 +0000 UTC m=+0.237156807 container attach 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  3 01:17:32 compute-0 intelligent_colden[192082]: AQCsjy9pqOAbCBAA0cRskAnphB81b+KjIn6MAw==
Dec  3 01:17:32 compute-0 systemd[1]: libpod-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope: Deactivated successfully.
Dec  3 01:17:32 compute-0 conmon[192082]: conmon 7aed69c8dcf77fbf99d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope/container/memory.events
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.143983166 +0000 UTC m=+0.277812910 container died 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:32 compute-0 podman[192066]: 2025-12-03 01:17:32.217302447 +0000 UTC m=+0.351132191 container remove 7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4 (image=quay.io/ceph/ceph:v18, name=intelligent_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b57f19b14275ce4b41cf11d1548d063131b74c4d2d2b9fdf94338086279442b8-merged.mount: Deactivated successfully.
Dec  3 01:17:32 compute-0 systemd[1]: libpod-conmon-7aed69c8dcf77fbf99d57e57c44f8e2eb596ec44b88c12a610eabfbdca5bf3d4.scope: Deactivated successfully.
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.324751813 +0000 UTC m=+0.077803392 container create b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.283951016 +0000 UTC m=+0.037002605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:32 compute-0 systemd[1]: Started libpod-conmon-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope.
Dec  3 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958752b5be4e6252bdea2658ce49e12ba65b7eb8d984288dcdc4dcc674cb625b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.468397994 +0000 UTC m=+0.221449663 container init b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.48296473 +0000 UTC m=+0.236016299 container start b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.489948921 +0000 UTC m=+0.243000540 container attach b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  3 01:17:32 compute-0 recursing_rhodes[192116]: setting min_mon_release = pacific
Dec  3 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: set fsid to 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:32 compute-0 recursing_rhodes[192116]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  3 01:17:32 compute-0 systemd[1]: libpod-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope: Deactivated successfully.
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.545654902 +0000 UTC m=+0.298706481 container died b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-958752b5be4e6252bdea2658ce49e12ba65b7eb8d984288dcdc4dcc674cb625b-merged.mount: Deactivated successfully.
Dec  3 01:17:32 compute-0 podman[192102]: 2025-12-03 01:17:32.626723283 +0000 UTC m=+0.379774852 container remove b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d (image=quay.io/ceph/ceph:v18, name=recursing_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:17:32 compute-0 systemd[1]: libpod-conmon-b5185f34524470b06cb63d168597b03f73979e9bae416328178138f46da6425d.scope: Deactivated successfully.
Dec  3 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.750662762 +0000 UTC m=+0.082746623 container create 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.715942944 +0000 UTC m=+0.048026855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:32 compute-0 systemd[1]: Started libpod-conmon-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope.
Dec  3 01:17:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.895225175 +0000 UTC m=+0.227309026 container init 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.920271818 +0000 UTC m=+0.252355669 container start 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:17:32 compute-0 podman[192134]: 2025-12-03 01:17:32.926735115 +0000 UTC m=+0.258819006 container attach 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:33 compute-0 systemd[1]: libpod-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope: Deactivated successfully.
Dec  3 01:17:33 compute-0 podman[192134]: 2025-12-03 01:17:33.057509992 +0000 UTC m=+0.389593833 container died 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e209abc4648d559eb670875efe7c614824725f41a5bb77eacc9e5d84e8a97a8-merged.mount: Deactivated successfully.
Dec  3 01:17:33 compute-0 podman[192134]: 2025-12-03 01:17:33.145746668 +0000 UTC m=+0.477830529 container remove 58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023 (image=quay.io/ceph/ceph:v18, name=strange_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:33 compute-0 systemd[1]: libpod-conmon-58d052080155b8ed1b3e67d658b8fa174f2cbc6c36976f2d2a7604161336b023.scope: Deactivated successfully.
Dec  3 01:17:33 compute-0 systemd[1]: Reloading.
Dec  3 01:17:33 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:33 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:33 compute-0 systemd[1]: Reloading.
Dec  3 01:17:33 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:33 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:34 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec  3 01:17:34 compute-0 systemd[1]: Reloading.
Dec  3 01:17:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:34 compute-0 podman[192267]: 2025-12-03 01:17:34.191077655 +0000 UTC m=+0.144354038 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:17:34 compute-0 systemd[1]: Reached target Ceph cluster 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:34 compute-0 systemd[1]: Reloading.
Dec  3 01:17:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:34 compute-0 systemd[1]: Reloading.
Dec  3 01:17:35 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:35 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:35 compute-0 systemd[1]: Created slice Slice /system/ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:35 compute-0 systemd[1]: Reached target System Time Set.
Dec  3 01:17:35 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec  3 01:17:35 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.754652179 +0000 UTC m=+0.078305845 container create f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.721014747 +0000 UTC m=+0.044668463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.922195633 +0000 UTC m=+0.245849339 container init f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:17:35 compute-0 podman[192441]: 2025-12-03 01:17:35.936436361 +0000 UTC m=+0.260090017 container start f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:17:35 compute-0 bash[192441]: f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088
Dec  3 01:17:35 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:36 compute-0 ceph-mon[192460]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: pidfile_write: ignore empty --pid-file
Dec  3 01:17:36 compute-0 ceph-mon[192460]: load: jerasure load: lrc 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Git sha 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB SUMMARY
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB Session ID:  UO1TRDRI7DJ41Z0ZY1VU
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                     Options.env: 0x559f6004bc40
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                Options.info_log: 0x559f60e14e80
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                                 Options.wal_dir: 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.write_buffer_manager: 0x559f60e24b40
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.row_cache: None
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                              Options.wal_filter: None
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.wal_compression: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_background_jobs: 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Compression algorithms supported:
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kZSTD supported: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:           Options.merge_operator: 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.compaction_filter: None
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f60e14a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f60e0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.compression: NoCompression
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.num_levels: 7
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 934233b3-95a6-4219-87ec-c9177c468bdc
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656019708, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656025137, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "UO1TRDRI7DJ41Z0ZY1VU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724656025343, "job": 1, "event": "recovery_finished"}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f60e36e00
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: DB pointer 0x559f60f40000
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:17:36 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f60e0d1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 01:17:36 compute-0 ceph-mon[192460]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@-1(???) e0 preinit fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  3 01:17:36 compute-0 ceph-mon[192460]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 01:17:36 compute-0 ceph-mon[192460]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-03T01:17:32.985354Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.092952636 +0000 UTC m=+0.090281828 container create ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).mds e1 new map
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mkfs 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.054154087 +0000 UTC m=+0.051483329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:36 compute-0 systemd[1]: Started libpod-conmon-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope.
Dec  3 01:17:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.280255873 +0000 UTC m=+0.277585105 container init ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.310344519 +0000 UTC m=+0.307673711 container start ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.317442112 +0000 UTC m=+0.314771304 container attach ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:17:36 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 01:17:36 compute-0 ceph-mon[192460]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834612785' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:  cluster:
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    id:     3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    health: HEALTH_OK
Dec  3 01:17:36 compute-0 priceless_franklin[192513]: 
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:  services:
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    mon: 1 daemons, quorum compute-0 (age 0.689301s)
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    mgr: no daemons active
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    osd: 0 osds: 0 up, 0 in
Dec  3 01:17:36 compute-0 priceless_franklin[192513]: 
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:  data:
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    pools:   0 pools, 0 pgs
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    objects: 0 objects, 0 B
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    usage:   0 B used, 0 B / 0 B avail
Dec  3 01:17:36 compute-0 priceless_franklin[192513]:    pgs:     
Dec  3 01:17:36 compute-0 priceless_franklin[192513]: 
Dec  3 01:17:36 compute-0 systemd[1]: libpod-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope: Deactivated successfully.
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.798601191 +0000 UTC m=+0.795930383 container died ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b73a01ca0a6fb40b21217e74a68294fd60c34096104bbc9aafc622916204d0-merged.mount: Deactivated successfully.
Dec  3 01:17:36 compute-0 podman[192461]: 2025-12-03 01:17:36.893911171 +0000 UTC m=+0.891240363 container remove ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760 (image=quay.io/ceph/ceph:v18, name=priceless_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:17:36 compute-0 systemd[1]: libpod-conmon-ce7c140a0e2124877a5688a949666d9653f9045dcbfd191f0cba6d3f471f1760.scope: Deactivated successfully.
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.041315543 +0000 UTC m=+0.100875716 container create 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:36.99536698 +0000 UTC m=+0.054927173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:37 compute-0 systemd[1]: Started libpod-conmon-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope.
Dec  3 01:17:37 compute-0 ceph-mon[192460]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.196076806 +0000 UTC m=+0.255636989 container init 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.22489184 +0000 UTC m=+0.284452023 container start 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.232482435 +0000 UTC m=+0.292042668 container attach 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:17:37 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 01:17:37 compute-0 ceph-mon[192460]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:17:37 compute-0 ceph-mon[192460]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 01:17:37 compute-0 xenodochial_lewin[192565]: 
Dec  3 01:17:37 compute-0 xenodochial_lewin[192565]: [global]
Dec  3 01:17:37 compute-0 xenodochial_lewin[192565]: 	fsid = 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:37 compute-0 xenodochial_lewin[192565]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  3 01:17:37 compute-0 xenodochial_lewin[192565]: 	osd_crush_chooseleaf_type = 0
Dec  3 01:17:37 compute-0 systemd[1]: libpod-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope: Deactivated successfully.
Dec  3 01:17:37 compute-0 conmon[192565]: conmon 59145c025e99201709c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope/container/memory.events
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.711052451 +0000 UTC m=+0.770612634 container died 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58d2a3838ad7b1289a82aa5d8eae7569597b15a30dd35964823b91a60840e80-merged.mount: Deactivated successfully.
Dec  3 01:17:37 compute-0 podman[192551]: 2025-12-03 01:17:37.810111562 +0000 UTC m=+0.869671715 container remove 59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc (image=quay.io/ceph/ceph:v18, name=xenodochial_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:17:37 compute-0 systemd[1]: libpod-conmon-59145c025e99201709c61ca3265178618ff149585066e8b7c3ce172cb49501fc.scope: Deactivated successfully.
Dec  3 01:17:37 compute-0 podman[192604]: 2025-12-03 01:17:37.897726963 +0000 UTC m=+0.064036696 container create 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:37 compute-0 podman[192604]: 2025-12-03 01:17:37.872573669 +0000 UTC m=+0.038883392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:37 compute-0 systemd[1]: Started libpod-conmon-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope.
Dec  3 01:17:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.077308942 +0000 UTC m=+0.243618725 container init 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.093055557 +0000 UTC m=+0.259365290 container start 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.099783002 +0000 UTC m=+0.266092735 container attach 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:17:38 compute-0 ceph-mon[192460]: from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:17:38 compute-0 ceph-mon[192460]: from='client.? 192.168.122.100:0/1010632956' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 01:17:38 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:17:38 compute-0 ceph-mon[192460]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11155986' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:17:38 compute-0 systemd[1]: libpod-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope: Deactivated successfully.
Dec  3 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.55265152 +0000 UTC m=+0.718961253 container died 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5646323742b72bc9f11b1b20cd6bebb655fc90c498a954f6d993269c969382b9-merged.mount: Deactivated successfully.
Dec  3 01:17:38 compute-0 podman[192604]: 2025-12-03 01:17:38.640952728 +0000 UTC m=+0.807262421 container remove 0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6 (image=quay.io/ceph/ceph:v18, name=competent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:17:38 compute-0 systemd[1]: libpod-conmon-0c73fdf5cbd70abab80a7da7cce28e50fe033eacd53b04688c0a84d044fd94c6.scope: Deactivated successfully.
Dec  3 01:17:38 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:17:39 compute-0 ceph-mon[192460]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 01:17:39 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 01:17:39 compute-0 ceph-mon[192460]: mon.compute-0@0(leader) e1 shutdown
Dec  3 01:17:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192456]: 2025-12-03T01:17:39.023+0000 7f8b7d467640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 01:17:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192456]: 2025-12-03T01:17:39.023+0000 7f8b7d467640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 01:17:39 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 01:17:39 compute-0 ceph-mon[192460]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 01:17:39 compute-0 podman[192687]: 2025-12-03 01:17:39.175919463 +0000 UTC m=+0.229095051 container died f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a609048b3c25870042e24f77e851cbb967507aa89e1bd1643fb30f7667c70e9-merged.mount: Deactivated successfully.
Dec  3 01:17:39 compute-0 podman[192687]: 2025-12-03 01:17:39.249668625 +0000 UTC m=+0.302844213 container remove f70b1c63b5f4737aa0f2e3104452100bd315e1afb4072c6b4a36af57baa73088 (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:39 compute-0 bash[192687]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0
Dec  3 01:17:39 compute-0 podman[192712]: 2025-12-03 01:17:39.372333172 +0000 UTC m=+0.104576046 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git)
Dec  3 01:17:39 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service: Deactivated successfully.
Dec  3 01:17:39 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:39 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0.service: Consumed 2.121s CPU time.
Dec  3 01:17:39 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:17:39 compute-0 podman[192802]: 2025-12-03 01:17:39.978848774 +0000 UTC m=+0.095908425 container create d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:39.943797498 +0000 UTC m=+0.060857219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17c90aaa0980dc893b967dd7a98ae702fc3b91f3d9d360a62eaa92221b12847/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:40.098218382 +0000 UTC m=+0.215278103 container init d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec  3 01:17:40 compute-0 podman[192802]: 2025-12-03 01:17:40.123224143 +0000 UTC m=+0.240283804 container start d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:17:40 compute-0 bash[192802]: d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b
Dec  3 01:17:40 compute-0 systemd[1]: Started Ceph mon.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:40 compute-0 ceph-mon[192821]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: pidfile_write: ignore empty --pid-file
Dec  3 01:17:40 compute-0 ceph-mon[192821]: load: jerasure load: lrc 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Git sha 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB SUMMARY
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB Session ID:  8J96JYHVNMM2V9HBWT3Y
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                     Options.env: 0x559a0ab11c40
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                Options.info_log: 0x559a0b5bf040
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                                 Options.wal_dir: 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.write_buffer_manager: 0x559a0b5ceb40
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.row_cache: None
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                              Options.wal_filter: None
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.wal_compression: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_background_jobs: 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Compression algorithms supported:
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kZSTD supported: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:           Options.merge_operator: 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.compaction_filter: None
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559a0b5bec40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559a0b5b71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.compression: NoCompression
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.num_levels: 7
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 934233b3-95a6-4219-87ec-c9177c468bdc
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660185193, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660188430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724660, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724660188604, "job": 1, "event": "recovery_finished"}
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559a0b5e0e00
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: DB pointer 0x559a0b66a000
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 01:17:40 compute-0 ceph-mon[192821]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???) e1 preinit fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).mds e1 new map
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 01:17:40 compute-0 ceph-mon[192821]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 01:17:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.26712173 +0000 UTC m=+0.086030504 container create 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.236427039 +0000 UTC m=+0.055335823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:40 compute-0 systemd[1]: Started libpod-conmon-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope.
Dec  3 01:17:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.437653917 +0000 UTC m=+0.256562741 container init 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.465678512 +0000 UTC m=+0.284587296 container start 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.471909045 +0000 UTC m=+0.290817839 container attach 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:17:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec  3 01:17:40 compute-0 systemd[1]: libpod-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope: Deactivated successfully.
Dec  3 01:17:40 compute-0 podman[192822]: 2025-12-03 01:17:40.939227816 +0000 UTC m=+0.758136590 container died 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.966 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.967 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.967 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1c7ae021e9ff4166d6da79bf78e4b54a953013bea90d655319c8f43538b9cd-merged.mount: Deactivated successfully.
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.996 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.998 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:40.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.002 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:17:41.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:17:41 compute-0 podman[192822]: 2025-12-03 01:17:41.023760051 +0000 UTC m=+0.842668785 container remove 002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:41 compute-0 systemd[1]: libpod-conmon-002e9d450ec5e2338be61cb938bc633b66d6ea5ea1306d010c2a0c1174dfed5e.scope: Deactivated successfully.
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.140944185 +0000 UTC m=+0.086896224 container create b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.106474973 +0000 UTC m=+0.052427102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:41 compute-0 systemd[1]: Started libpod-conmon-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope.
Dec  3 01:17:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.28187221 +0000 UTC m=+0.227824349 container init b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.297973103 +0000 UTC m=+0.243925172 container start b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.307433434 +0000 UTC m=+0.253385743 container attach b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec  3 01:17:41 compute-0 systemd[1]: libpod-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope: Deactivated successfully.
Dec  3 01:17:41 compute-0 conmon[192931]: conmon b8a2df7493435c0b5424 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope/container/memory.events
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.815972743 +0000 UTC m=+0.761924802 container died b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-701acbb52159b2b0d31d846ae377b4362b3d771258f3aad777bd46500aaf15af-merged.mount: Deactivated successfully.
Dec  3 01:17:41 compute-0 podman[192917]: 2025-12-03 01:17:41.893894647 +0000 UTC m=+0.839846706 container remove b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:17:41 compute-0 systemd[1]: libpod-conmon-b8a2df7493435c0b5424a64d9ecf640fcd5b60d5759191b88611bf70d6ac01e8.scope: Deactivated successfully.
Dec  3 01:17:41 compute-0 systemd[1]: Reloading.
Dec  3 01:17:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:42 compute-0 systemd[1]: Reloading.
Dec  3 01:17:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:17:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:17:42 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rysove for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.420767342 +0000 UTC m=+0.091307032 container create b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.385697125 +0000 UTC m=+0.056236865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b1f3537d6eaaa5f98e53a91979fcc53e9ba737d44edc85c6b3b38011879166/merged/var/lib/ceph/mgr/ceph-compute-0.rysove supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.519657079 +0000 UTC m=+0.190196819 container init b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:17:43 compute-0 podman[193090]: 2025-12-03 01:17:43.546373952 +0000 UTC m=+0.216913632 container start b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:17:43 compute-0 bash[193090]: b81e9a34279123d4d10924068f04a6673437db50574802dc38a9eea052ed9afb
Dec  3 01:17:43 compute-0 systemd[1]: Started Ceph mgr.compute-0.rysove for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:17:43 compute-0 ceph-mgr[193109]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:17:43 compute-0 ceph-mgr[193109]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 01:17:43 compute-0 ceph-mgr[193109]: pidfile_write: ignore empty --pid-file
Dec  3 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.681909795 +0000 UTC m=+0.071863768 container create 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:17:43 compute-0 systemd[1]: Started libpod-conmon-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope.
Dec  3 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.660968263 +0000 UTC m=+0.050922266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:43 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'alerts'
Dec  3 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.810309733 +0000 UTC m=+0.200263806 container init 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.820108932 +0000 UTC m=+0.210062945 container start 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:17:43 compute-0 podman[193110]: 2025-12-03 01:17:43.825976526 +0000 UTC m=+0.215930539 container attach 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:17:43 compute-0 podman[193148]: 2025-12-03 01:17:43.89038705 +0000 UTC m=+0.155605344 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'balancer'
Dec  3 01:17:44 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:44.085+0000 7fca98514140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:17:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:17:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659027998' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:17:44 compute-0 zealous_cerf[193151]: 
Dec  3 01:17:44 compute-0 zealous_cerf[193151]: {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "health": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "status": "HEALTH_OK",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "checks": {},
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "mutes": []
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "election_epoch": 5,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "quorum": [
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        0
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    ],
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "quorum_names": [
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "compute-0"
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    ],
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "quorum_age": 4,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "monmap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "epoch": 1,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "min_mon_release_name": "reef",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_mons": 1
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "osdmap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "epoch": 1,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_osds": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_up_osds": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "osd_up_since": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_in_osds": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "osd_in_since": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_remapped_pgs": 0
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "pgmap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "pgs_by_state": [],
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_pgs": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_pools": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_objects": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "data_bytes": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "bytes_used": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "bytes_avail": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "bytes_total": 0
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "fsmap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "epoch": 1,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "by_rank": [],
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "up:standby": 0
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "mgrmap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "available": false,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "num_standbys": 0,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "modules": [
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:            "iostat",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:            "nfs",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:            "restful"
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        ],
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "services": {}
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "servicemap": {
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "epoch": 1,
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:        "services": {}
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    },
Dec  3 01:17:44 compute-0 zealous_cerf[193151]:    "progress_events": {}
Dec  3 01:17:44 compute-0 zealous_cerf[193151]: }
Dec  3 01:17:44 compute-0 systemd[1]: libpod-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope: Deactivated successfully.
Dec  3 01:17:44 compute-0 podman[193110]: 2025-12-03 01:17:44.297317645 +0000 UTC m=+0.687271668 container died 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:17:44 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:44.334+0000 7fca98514140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:17:44 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'cephadm'
Dec  3 01:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d108ab2f7544e6b46b888ad4715cdf8bbb86356935b6e79d301ff1d8102918-merged.mount: Deactivated successfully.
Dec  3 01:17:44 compute-0 podman[193110]: 2025-12-03 01:17:44.386780451 +0000 UTC m=+0.776734434 container remove 7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e (image=quay.io/ceph/ceph:v18, name=zealous_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:17:44 compute-0 systemd[1]: libpod-conmon-7698cae6809da8495d498cb4a2e4a105496ca784ce39a497bccd51ddc3f27e9e.scope: Deactivated successfully.
Dec  3 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'crash'
Dec  3 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:17:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:46.518+0000 7fca98514140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:17:46 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'dashboard'
Dec  3 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.546217606 +0000 UTC m=+0.106380800 container create 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.510845762 +0000 UTC m=+0.071008996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:46 compute-0 systemd[1]: Started libpod-conmon-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope.
Dec  3 01:17:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.716128919 +0000 UTC m=+0.276292163 container init 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.746722356 +0000 UTC m=+0.306885550 container start 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:17:46 compute-0 podman[193219]: 2025-12-03 01:17:46.775062329 +0000 UTC m=+0.335225583 container attach 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672473183' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:17:47 compute-0 zen_ganguly[193236]: 
Dec  3 01:17:47 compute-0 zen_ganguly[193236]: {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "health": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "status": "HEALTH_OK",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "checks": {},
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "mutes": []
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "election_epoch": 5,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "quorum": [
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        0
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    ],
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "quorum_names": [
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "compute-0"
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    ],
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "quorum_age": 6,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "monmap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "epoch": 1,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "min_mon_release_name": "reef",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_mons": 1
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "osdmap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "epoch": 1,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_osds": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_up_osds": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "osd_up_since": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_in_osds": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "osd_in_since": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_remapped_pgs": 0
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "pgmap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "pgs_by_state": [],
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_pgs": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_pools": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_objects": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "data_bytes": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "bytes_used": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "bytes_avail": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "bytes_total": 0
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "fsmap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "epoch": 1,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "by_rank": [],
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "up:standby": 0
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "mgrmap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "available": false,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "num_standbys": 0,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "modules": [
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:            "iostat",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:            "nfs",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:            "restful"
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        ],
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "services": {}
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "servicemap": {
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "epoch": 1,
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:        "services": {}
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    },
Dec  3 01:17:47 compute-0 zen_ganguly[193236]:    "progress_events": {}
Dec  3 01:17:47 compute-0 zen_ganguly[193236]: }
Dec  3 01:17:47 compute-0 systemd[1]: libpod-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope: Deactivated successfully.
Dec  3 01:17:47 compute-0 podman[193219]: 2025-12-03 01:17:47.219400188 +0000 UTC m=+0.779563382 container died 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb1c9a05059269a86c66946e14e3b8d796474476fb2a76cac22c8ebec7d1f51-merged.mount: Deactivated successfully.
Dec  3 01:17:47 compute-0 podman[193219]: 2025-12-03 01:17:47.30499923 +0000 UTC m=+0.865162394 container remove 52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516 (image=quay.io/ceph/ceph:v18, name=zen_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:17:47 compute-0 systemd[1]: libpod-conmon-52ec72f920218c46819b05418a19bba889d0f6b6405e90d95f6d2e6e95e92516.scope: Deactivated successfully.
Dec  3 01:17:47 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'devicehealth'
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.189+0000 7fca98514140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]:  from numpy import show_config as show_numpy_config
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'influx'
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.684+0000 7fca98514140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 01:17:48 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'insights'
Dec  3 01:17:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:48.906+0000 7fca98514140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'iostat'
Dec  3 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 01:17:49 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'k8sevents'
Dec  3 01:17:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:49.350+0000 7fca98514140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.48421718 +0000 UTC m=+0.134438347 container create ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.41221621 +0000 UTC m=+0.062437427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:49 compute-0 systemd[1]: Started libpod-conmon-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope.
Dec  3 01:17:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.623677578 +0000 UTC m=+0.273898745 container init ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.638970822 +0000 UTC m=+0.289191969 container start ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:17:49 compute-0 podman[193276]: 2025-12-03 01:17:49.643349399 +0000 UTC m=+0.293570536 container attach ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:17:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661237115' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:17:50 compute-0 confident_morse[193293]: 
Dec  3 01:17:50 compute-0 confident_morse[193293]: {
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "health": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "status": "HEALTH_OK",
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "checks": {},
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "mutes": []
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "election_epoch": 5,
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "quorum": [
Dec  3 01:17:50 compute-0 confident_morse[193293]:        0
Dec  3 01:17:50 compute-0 confident_morse[193293]:    ],
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "quorum_names": [
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "compute-0"
Dec  3 01:17:50 compute-0 confident_morse[193293]:    ],
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "quorum_age": 9,
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "monmap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "epoch": 1,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "min_mon_release_name": "reef",
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_mons": 1
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "osdmap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "epoch": 1,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_osds": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_up_osds": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "osd_up_since": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_in_osds": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "osd_in_since": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_remapped_pgs": 0
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "pgmap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "pgs_by_state": [],
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_pgs": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_pools": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_objects": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "data_bytes": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "bytes_used": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "bytes_avail": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "bytes_total": 0
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "fsmap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "epoch": 1,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "by_rank": [],
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "up:standby": 0
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "mgrmap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "available": false,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "num_standbys": 0,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "modules": [
Dec  3 01:17:50 compute-0 confident_morse[193293]:            "iostat",
Dec  3 01:17:50 compute-0 confident_morse[193293]:            "nfs",
Dec  3 01:17:50 compute-0 confident_morse[193293]:            "restful"
Dec  3 01:17:50 compute-0 confident_morse[193293]:        ],
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "services": {}
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "servicemap": {
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "epoch": 1,
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:17:50 compute-0 confident_morse[193293]:        "services": {}
Dec  3 01:17:50 compute-0 confident_morse[193293]:    },
Dec  3 01:17:50 compute-0 confident_morse[193293]:    "progress_events": {}
Dec  3 01:17:50 compute-0 confident_morse[193293]: }
Dec  3 01:17:50 compute-0 systemd[1]: libpod-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope: Deactivated successfully.
Dec  3 01:17:50 compute-0 podman[193276]: 2025-12-03 01:17:50.105748159 +0000 UTC m=+0.755969336 container died ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e78a6e97892619b9640ae639d1953c1e91d3aca32ddaccc44de005930597ec6a-merged.mount: Deactivated successfully.
Dec  3 01:17:50 compute-0 podman[193276]: 2025-12-03 01:17:50.183661603 +0000 UTC m=+0.833882770 container remove ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84 (image=quay.io/ceph/ceph:v18, name=confident_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:17:50 compute-0 systemd[1]: libpod-conmon-ccb7c39d81bf42a35dfe5522e0ba84d5fe333ca01edd4d78f8000bb528ae5c84.scope: Deactivated successfully.
Dec  3 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'localpool'
Dec  3 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 01:17:51 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mirroring'
Dec  3 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'nfs'
Dec  3 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.319268886 +0000 UTC m=+0.091985749 container create f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.286886335 +0000 UTC m=+0.059603248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:52 compute-0 systemd[1]: Started libpod-conmon-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope.
Dec  3 01:17:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.490419549 +0000 UTC m=+0.263136462 container init f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.50397489 +0000 UTC m=+0.276691723 container start f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:17:52 compute-0 podman[193331]: 2025-12-03 01:17:52.51093844 +0000 UTC m=+0.283655293 container attach f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 01:17:52 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'orchestrator'
Dec  3 01:17:52 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:52.914+0000 7fca98514140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 01:17:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:17:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636514488' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:17:52 compute-0 musing_pascal[193347]: 
Dec  3 01:17:52 compute-0 musing_pascal[193347]: {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "health": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "status": "HEALTH_OK",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "checks": {},
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "mutes": []
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "election_epoch": 5,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "quorum": [
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        0
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    ],
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "quorum_names": [
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "compute-0"
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    ],
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "quorum_age": 12,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "monmap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "epoch": 1,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "min_mon_release_name": "reef",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_mons": 1
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "osdmap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "epoch": 1,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_osds": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_up_osds": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "osd_up_since": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_in_osds": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "osd_in_since": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_remapped_pgs": 0
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "pgmap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "pgs_by_state": [],
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_pgs": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_pools": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_objects": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "data_bytes": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "bytes_used": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "bytes_avail": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "bytes_total": 0
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "fsmap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "epoch": 1,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "by_rank": [],
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "up:standby": 0
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "mgrmap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "available": false,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "num_standbys": 0,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "modules": [
Dec  3 01:17:52 compute-0 musing_pascal[193347]:            "iostat",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:            "nfs",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:            "restful"
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        ],
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "services": {}
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "servicemap": {
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "epoch": 1,
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:17:52 compute-0 musing_pascal[193347]:        "services": {}
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    },
Dec  3 01:17:52 compute-0 musing_pascal[193347]:    "progress_events": {}
Dec  3 01:17:52 compute-0 musing_pascal[193347]: }
Dec  3 01:17:53 compute-0 systemd[1]: libpod-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope: Deactivated successfully.
Dec  3 01:17:53 compute-0 podman[193331]: 2025-12-03 01:17:53.004718448 +0000 UTC m=+0.777435311 container died f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d94a939d743357a56cff4c30be32590bef37be9ef55bf970ce89212ffce0f4-merged.mount: Deactivated successfully.
Dec  3 01:17:53 compute-0 podman[193331]: 2025-12-03 01:17:53.103111073 +0000 UTC m=+0.875827906 container remove f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be (image=quay.io/ceph/ceph:v18, name=musing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:17:53 compute-0 systemd[1]: libpod-conmon-f6c27d908a544a83088242a459004fa2f593ee6687d55f36935aa738bb88a4be.scope: Deactivated successfully.
Dec  3 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 01:17:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:53.600+0000 7fca98514140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 01:17:53 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_support'
Dec  3 01:17:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:53.883+0000 7fca98514140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.128+0000 7fca98514140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'progress'
Dec  3 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.403+0000 7fca98514140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 01:17:54 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'prometheus'
Dec  3 01:17:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:54.640+0000 7fca98514140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.236896691 +0000 UTC m=+0.088630537 container create adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.203647898 +0000 UTC m=+0.055381764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:55 compute-0 systemd[1]: Started libpod-conmon-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope.
Dec  3 01:17:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.411939409 +0000 UTC m=+0.263673335 container init adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.429801185 +0000 UTC m=+0.281535041 container start adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.436983101 +0000 UTC m=+0.288717007 container attach adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 01:17:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:55.627+0000 7fca98514140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rbd_support'
Dec  3 01:17:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:17:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824690462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:17:55 compute-0 peaceful_elion[193400]: 
Dec  3 01:17:55 compute-0 peaceful_elion[193400]: {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "health": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "status": "HEALTH_OK",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "checks": {},
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "mutes": []
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "election_epoch": 5,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "quorum": [
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        0
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    ],
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "quorum_names": [
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "compute-0"
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    ],
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "quorum_age": 15,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "monmap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "epoch": 1,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "min_mon_release_name": "reef",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_mons": 1
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "osdmap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "epoch": 1,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_osds": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_up_osds": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "osd_up_since": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_in_osds": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "osd_in_since": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_remapped_pgs": 0
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "pgmap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "pgs_by_state": [],
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_pgs": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_pools": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_objects": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "data_bytes": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "bytes_used": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "bytes_avail": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "bytes_total": 0
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "fsmap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "epoch": 1,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "by_rank": [],
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "up:standby": 0
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "mgrmap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "available": false,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "num_standbys": 0,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "modules": [
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:            "iostat",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:            "nfs",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:            "restful"
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        ],
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "services": {}
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "servicemap": {
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "epoch": 1,
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:        "services": {}
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    },
Dec  3 01:17:55 compute-0 peaceful_elion[193400]:    "progress_events": {}
Dec  3 01:17:55 compute-0 peaceful_elion[193400]: }
Dec  3 01:17:55 compute-0 systemd[1]: libpod-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope: Deactivated successfully.
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.918650492 +0000 UTC m=+0.770384348 container died adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 01:17:55 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'restful'
Dec  3 01:17:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:55.919+0000 7fca98514140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 01:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabc1b10fa3f8aa6ac4ae6f9a5a13fe55fc8f528a8f21a7360d6757fbfceafad-merged.mount: Deactivated successfully.
Dec  3 01:17:55 compute-0 podman[193384]: 2025-12-03 01:17:55.988800697 +0000 UTC m=+0.840534523 container remove adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08 (image=quay.io/ceph/ceph:v18, name=peaceful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:56 compute-0 systemd[1]: libpod-conmon-adcc461633371adb05942ba48819eda210bc7c0d667e8f48717a1d8c48c03b08.scope: Deactivated successfully.
Dec  3 01:17:56 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rgw'
Dec  3 01:17:57 compute-0 ceph-mgr[193109]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 01:17:57 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rook'
Dec  3 01:17:57 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:57.280+0000 7fca98514140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 01:17:58 compute-0 podman[193437]: 2025-12-03 01:17:58.087367774 +0000 UTC m=+0.056315317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'selftest'
Dec  3 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.288+0000 7fca98514140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.378123267 +0000 UTC m=+1.347070760 container create 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:17:59 compute-0 systemd[1]: Started libpod-conmon-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope.
Dec  3 01:17:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'snap_schedule'
Dec  3 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.531+0000 7fca98514140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.532992462 +0000 UTC m=+1.501940005 container init 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.546855719 +0000 UTC m=+1.515803212 container start 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:17:59 compute-0 podman[193437]: 2025-12-03 01:17:59.555049864 +0000 UTC m=+1.523997407 container attach 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 01:17:59 compute-0 podman[158098]: time="2025-12-03T01:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23488 "" "Go-http-client/1.1"
Dec  3 01:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 01:17:59 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'stats'
Dec  3 01:17:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:17:59.788+0000 7fca98514140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 01:18:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:18:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137026255' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'status'
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]: 
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]: {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "health": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "status": "HEALTH_OK",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "checks": {},
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "mutes": []
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "election_epoch": 5,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "quorum": [
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        0
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    ],
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "quorum_names": [
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "compute-0"
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    ],
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "quorum_age": 19,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "monmap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "epoch": 1,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "min_mon_release_name": "reef",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_mons": 1
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "osdmap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "epoch": 1,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_osds": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_up_osds": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "osd_up_since": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_in_osds": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "osd_in_since": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_remapped_pgs": 0
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "pgmap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "pgs_by_state": [],
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_pgs": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_pools": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_objects": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "data_bytes": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "bytes_used": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "bytes_avail": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "bytes_total": 0
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "fsmap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "epoch": 1,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "by_rank": [],
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "up:standby": 0
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "mgrmap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "available": false,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "num_standbys": 0,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "modules": [
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:            "iostat",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:            "nfs",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:            "restful"
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        ],
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "services": {}
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "servicemap": {
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "epoch": 1,
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:        "services": {}
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    },
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]:    "progress_events": {}
Dec  3 01:18:00 compute-0 dazzling_hermann[193454]: }
Dec  3 01:18:00 compute-0 systemd[1]: libpod-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope: Deactivated successfully.
Dec  3 01:18:00 compute-0 podman[193437]: 2025-12-03 01:18:00.057008078 +0000 UTC m=+2.025955581 container died 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-948935d15c868dcbdf7fbda44974eff029cef04bc6d6be85c947adf14fa540ae-merged.mount: Deactivated successfully.
Dec  3 01:18:00 compute-0 podman[193437]: 2025-12-03 01:18:00.170093227 +0000 UTC m=+2.139040690 container remove 8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1 (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:18:00 compute-0 systemd[1]: libpod-conmon-8a09aafbcbc7727f9f1ae8a6918664b206a78597cab6351874441b4035f872b1.scope: Deactivated successfully.
Dec  3 01:18:00 compute-0 podman[193481]: 2025-12-03 01:18:00.227595823 +0000 UTC m=+0.119595546 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:18:00 compute-0 podman[193489]: 2025-12-03 01:18:00.235888601 +0000 UTC m=+0.112999597 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:18:00 compute-0 podman[193487]: 2025-12-03 01:18:00.239485224 +0000 UTC m=+0.140711021 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public)
Dec  3 01:18:00 compute-0 podman[193494]: 2025-12-03 01:18:00.26765699 +0000 UTC m=+0.141396600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telegraf'
Dec  3 01:18:00 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:00.316+0000 7fca98514140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 01:18:00 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:00.546+0000 7fca98514140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 01:18:00 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telemetry'
Dec  3 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 01:18:01 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:01.115+0000 7fca98514140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:18:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:01 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'volumes'
Dec  3 01:18:01 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:01.745+0000 7fca98514140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.333126737 +0000 UTC m=+0.118833144 container create 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.266220731 +0000 UTC m=+0.051927158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:02 compute-0 systemd[1]: Started libpod-conmon-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope.
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'zabbix'
Dec  3 01:18:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:02.425+0000 7fca98514140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 01:18:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.490886504 +0000 UTC m=+0.276592921 container init 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.521727008 +0000 UTC m=+0.307433385 container start 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.529274474 +0000 UTC m=+0.314980881 container attach 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 01:18:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:02.666+0000 7fca98514140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: ms_deliver_dispatch: unhandled message 0x562b3e82d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rysove
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr handle_mgr_map Activating!
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr handle_mgr_map I am now activating
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rysove(active, starting, since 0.0209008s)
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: balancer
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: crash
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Starting
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Manager daemon compute-0.rysove is now available
Dec  3 01:18:02 compute-0 ceph-mon[192821]: Activating manager daemon compute-0.rysove
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:18:02
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: devicehealth
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: iostat
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Starting
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: nfs
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: orchestrator
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: pg_autoscaler
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: progress
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loading...
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] No stored events to load
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded [] historic events
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] recovery thread starting
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] starting setup
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: rbd_support
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: restful
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: status
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [restful INFO root] server_addr: :: server_port: 8003
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [restful WARNING root] server not running: no certificate configured
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: telemetry
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] PerfHandler: starting
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TaskHandler: starting
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: [rbd_support INFO root] setup complete
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:02 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: volumes
Dec  3 01:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001752721' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]: 
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]: {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "health": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "status": "HEALTH_OK",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "checks": {},
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "mutes": []
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "election_epoch": 5,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "quorum": [
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        0
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    ],
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "quorum_names": [
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "compute-0"
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    ],
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "quorum_age": 22,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "monmap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "epoch": 1,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "min_mon_release_name": "reef",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_mons": 1
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "osdmap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "epoch": 1,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_osds": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_up_osds": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "osd_up_since": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_in_osds": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "osd_in_since": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_remapped_pgs": 0
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "pgmap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "pgs_by_state": [],
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_pgs": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_pools": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_objects": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "data_bytes": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "bytes_used": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "bytes_avail": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "bytes_total": 0
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "fsmap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "epoch": 1,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "by_rank": [],
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "up:standby": 0
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "mgrmap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "available": false,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "num_standbys": 0,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "modules": [
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:            "iostat",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:            "nfs",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:            "restful"
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        ],
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "services": {}
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "servicemap": {
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "epoch": 1,
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:        "services": {}
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    },
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]:    "progress_events": {}
Dec  3 01:18:02 compute-0 epic_chaplygin[193596]: }
Dec  3 01:18:02 compute-0 systemd[1]: libpod-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope: Deactivated successfully.
Dec  3 01:18:02 compute-0 podman[193580]: 2025-12-03 01:18:02.977117518 +0000 UTC m=+0.762823975 container died 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-eabac7816a09ffc5ebd412a76886a5a1e54cd030ba10a0bcd5d74680bb247088-merged.mount: Deactivated successfully.
Dec  3 01:18:03 compute-0 podman[193580]: 2025-12-03 01:18:03.065913851 +0000 UTC m=+0.851620238 container remove 59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562 (image=quay.io/ceph/ceph:v18, name=epic_chaplygin, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:03 compute-0 systemd[1]: libpod-conmon-59ef0ca15039ab3d62bd7e0531260544f5c3261c587f8364d3b748bd1c91d562.scope: Deactivated successfully.
Dec  3 01:18:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rysove(active, since 1.04119s)
Dec  3 01:18:03 compute-0 ceph-mon[192821]: Manager daemon compute-0.rysove is now available
Dec  3 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec  3 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec  3 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:03 compute-0 ceph-mon[192821]: from='mgr.14102 192.168.122.100:0/1648204686' entity='mgr.compute-0.rysove' 
Dec  3 01:18:04 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rysove(active, since 2s)
Dec  3 01:18:04 compute-0 podman[193712]: 2025-12-03 01:18:04.893453574 +0000 UTC m=+0.137344184 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  3 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.200891618 +0000 UTC m=+0.097416001 container create 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:05 compute-0 systemd[1]: Started libpod-conmon-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope.
Dec  3 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.164251869 +0000 UTC m=+0.060776312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.351975304 +0000 UTC m=+0.248499697 container init 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.376099365 +0000 UTC m=+0.272623758 container start 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:18:05 compute-0 podman[193732]: 2025-12-03 01:18:05.382737625 +0000 UTC m=+0.279262018 container attach 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:18:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  3 01:18:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104220266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]: 
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]: {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "health": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "status": "HEALTH_OK",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "checks": {},
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "mutes": []
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "election_epoch": 5,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "quorum": [
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        0
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    ],
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "quorum_names": [
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "compute-0"
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    ],
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "quorum_age": 25,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "monmap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "epoch": 1,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "min_mon_release_name": "reef",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_mons": 1
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "osdmap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "epoch": 1,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_osds": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_up_osds": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "osd_up_since": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_in_osds": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "osd_in_since": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_remapped_pgs": 0
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "pgmap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "pgs_by_state": [],
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_pgs": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_pools": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_objects": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "data_bytes": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "bytes_used": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "bytes_avail": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "bytes_total": 0
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "fsmap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "epoch": 1,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "by_rank": [],
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "up:standby": 0
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "mgrmap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "available": true,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "num_standbys": 0,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "modules": [
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:            "iostat",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:            "nfs",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:            "restful"
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        ],
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "services": {}
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "servicemap": {
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "epoch": 1,
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "modified": "2025-12-03T01:17:36.090330+0000",
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:        "services": {}
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    },
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]:    "progress_events": {}
Dec  3 01:18:06 compute-0 intelligent_roentgen[193749]: }
Dec  3 01:18:06 compute-0 systemd[1]: libpod-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope: Deactivated successfully.
Dec  3 01:18:06 compute-0 podman[193775]: 2025-12-03 01:18:06.168821316 +0000 UTC m=+0.061166033 container died 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ff7c17938349a3d0bf02bf9e78d191b2e4b9686a1d0fe8dd77622d2c9c10a6-merged.mount: Deactivated successfully.
Dec  3 01:18:06 compute-0 podman[193775]: 2025-12-03 01:18:06.270904739 +0000 UTC m=+0.163249456 container remove 0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b (image=quay.io/ceph/ceph:v18, name=intelligent_roentgen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:18:06 compute-0 systemd[1]: libpod-conmon-0f538f75a1ec5716486008907dc700f7bb3a89d2eefbf8b80aa7e02f00cc403b.scope: Deactivated successfully.
Dec  3 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.422423948 +0000 UTC m=+0.092044007 container create 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.386273023 +0000 UTC m=+0.055893122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:06 compute-0 systemd[1]: Started libpod-conmon-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope.
Dec  3 01:18:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.589004949 +0000 UTC m=+0.258625058 container init 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.606819729 +0000 UTC m=+0.276439778 container start 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:06 compute-0 podman[193790]: 2025-12-03 01:18:06.61315838 +0000 UTC m=+0.282778489 container attach 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:06 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 01:18:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2333107779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:18:07 compute-0 systemd[1]: libpod-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope: Deactivated successfully.
Dec  3 01:18:07 compute-0 podman[193790]: 2025-12-03 01:18:07.192322296 +0000 UTC m=+0.861942375 container died 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-838ae576b7637b59bb2f5bfca0aa18d3c0a2b6a58e2c7847446a1a6cf8dc9287-merged.mount: Deactivated successfully.
Dec  3 01:18:07 compute-0 podman[193790]: 2025-12-03 01:18:07.273732777 +0000 UTC m=+0.943352836 container remove 544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53 (image=quay.io/ceph/ceph:v18, name=keen_haslett, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:07 compute-0 systemd[1]: libpod-conmon-544f13e5f6846048ebe5af697a8b573550d31defa73dd61e6cffaa4295ea6b53.scope: Deactivated successfully.
Dec  3 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.380850404 +0000 UTC m=+0.072684002 container create 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.346084959 +0000 UTC m=+0.037918597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:07 compute-0 systemd[1]: Started libpod-conmon-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope.
Dec  3 01:18:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.541173615 +0000 UTC m=+0.233007203 container init 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.556946737 +0000 UTC m=+0.248780335 container start 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:07 compute-0 podman[193843]: 2025-12-03 01:18:07.564716729 +0000 UTC m=+0.256550357 container attach 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:18:07 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2333107779' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:18:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec  3 01:18:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:08 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  3 01:18:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  1: '-n'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  2: 'mgr.compute-0.rysove'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  3: '-f'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  4: '--setuser'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  5: 'ceph'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  6: '--setgroup'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  7: 'ceph'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  8: '--default-log-to-file=false'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  9: '--default-log-to-journald=true'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: mgr respawn  exe_path /proc/self/exe
Dec  3 01:18:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rysove(active, since 6s)
Dec  3 01:18:08 compute-0 systemd[1]: libpod-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope: Deactivated successfully.
Dec  3 01:18:08 compute-0 podman[193843]: 2025-12-03 01:18:08.863359107 +0000 UTC m=+1.555192705 container died 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c7c6be2996c2b69dd0ee10083e283f8de7c9f6bdf048a7974f6e99056eaff6d-merged.mount: Deactivated successfully.
Dec  3 01:18:08 compute-0 podman[193843]: 2025-12-03 01:18:08.929151081 +0000 UTC m=+1.620984639 container remove 266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41 (image=quay.io/ceph/ceph:v18, name=suspicious_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:18:08 compute-0 systemd[1]: libpod-conmon-266e2cabd687d64ab72828561984c0b5a520403518505c6dd8ab04c079808e41.scope: Deactivated successfully.
Dec  3 01:18:08 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: ignoring --setuser ceph since I am not root
Dec  3 01:18:08 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: ignoring --setgroup ceph since I am not root
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 01:18:08 compute-0 ceph-mgr[193109]: pidfile_write: ignore empty --pid-file
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.041846668 +0000 UTC m=+0.087893728 container create eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.010116859 +0000 UTC m=+0.056163969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:09 compute-0 systemd[1]: Started libpod-conmon-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope.
Dec  3 01:18:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'alerts'
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.196188178 +0000 UTC m=+0.242235248 container init eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.226261879 +0000 UTC m=+0.272308919 container start eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.232995292 +0000 UTC m=+0.279042352 container attach eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'balancer'
Dec  3 01:18:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:09.476+0000 7fac125fd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:18:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:09.714+0000 7fac125fd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:18:09 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'cephadm'
Dec  3 01:18:09 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4209809939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 01:18:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 01:18:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946545370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 01:18:09 compute-0 tender_elion[193933]: {
Dec  3 01:18:09 compute-0 tender_elion[193933]:    "epoch": 5,
Dec  3 01:18:09 compute-0 tender_elion[193933]:    "available": true,
Dec  3 01:18:09 compute-0 tender_elion[193933]:    "active_name": "compute-0.rysove",
Dec  3 01:18:09 compute-0 tender_elion[193933]:    "num_standby": 0
Dec  3 01:18:09 compute-0 tender_elion[193933]: }
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.861262833 +0000 UTC m=+0.907309863 container died eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:18:09 compute-0 systemd[1]: libpod-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope: Deactivated successfully.
Dec  3 01:18:09 compute-0 podman[193957]: 2025-12-03 01:18:09.870070315 +0000 UTC m=+0.126429121 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, release=1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec  3 01:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0626ad42801a56b76fabfa4f558cce1f395cf22eaa096b1e8e1d2ac71a5d78d-merged.mount: Deactivated successfully.
Dec  3 01:18:09 compute-0 podman[193894]: 2025-12-03 01:18:09.926785259 +0000 UTC m=+0.972832289 container remove eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e (image=quay.io/ceph/ceph:v18, name=tender_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:09 compute-0 systemd[1]: libpod-conmon-eaae965a3db105e92dea22149616c4ab7108000c1ded5b20f01cf704bba1b19e.scope: Deactivated successfully.
Dec  3 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.016701714 +0000 UTC m=+0.061092490 container create 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:18:10 compute-0 systemd[1]: Started libpod-conmon-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope.
Dec  3 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:09.993646854 +0000 UTC m=+0.038037730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.145247315 +0000 UTC m=+0.189638131 container init 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.165490865 +0000 UTC m=+0.209881661 container start 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:10 compute-0 podman[193988]: 2025-12-03 01:18:10.172123265 +0000 UTC m=+0.216514061 container attach 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'crash'
Dec  3 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:18:11 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'dashboard'
Dec  3 01:18:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:11.947+0000 7fac125fd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'devicehealth'
Dec  3 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 01:18:13 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:13.586+0000 7fac125fd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  3 01:18:13 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]:  from numpy import show_config as show_numpy_config
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.092+0000 7fac125fd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'influx'
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.317+0000 7fac125fd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'insights'
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'iostat'
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:14.783+0000 7fac125fd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  3 01:18:14 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'k8sevents'
Dec  3 01:18:14 compute-0 podman[194041]: 2025-12-03 01:18:14.816383218 +0000 UTC m=+0.113896152 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:18:16 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'localpool'
Dec  3 01:18:16 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 01:18:17 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'mirroring'
Dec  3 01:18:17 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'nfs'
Dec  3 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 01:18:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:18.244+0000 7fac125fd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  3 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'orchestrator'
Dec  3 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:18.889+0000 7fac125fd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:18 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.147+0000 7fac125fd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'osd_support'
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.375+0000 7fac125fd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.640+0000 7fac125fd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'progress'
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:19.866+0000 7fac125fd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  3 01:18:19 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'prometheus'
Dec  3 01:18:20 compute-0 ceph-mgr[193109]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 01:18:20 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:20.825+0000 7fac125fd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  3 01:18:20 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rbd_support'
Dec  3 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 01:18:21 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:21.110+0000 7fac125fd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  3 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'restful'
Dec  3 01:18:21 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rgw'
Dec  3 01:18:22 compute-0 ceph-mgr[193109]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 01:18:22 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:22.481+0000 7fac125fd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  3 01:18:22 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'rook'
Dec  3 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 01:18:24 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:24.589+0000 7fac125fd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  3 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'selftest'
Dec  3 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 01:18:24 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'snap_schedule'
Dec  3 01:18:24 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:24.870+0000 7fac125fd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.138+0000 7fac125fd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'stats'
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'status'
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.636+0000 7fac125fd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telegraf'
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:25.874+0000 7fac125fd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  3 01:18:25 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'telemetry'
Dec  3 01:18:26 compute-0 ceph-mgr[193109]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 01:18:26 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:26.473+0000 7fac125fd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  3 01:18:26 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:27 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:27.132+0000 7fac125fd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  3 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'volumes'
Dec  3 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 01:18:27 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:27.864+0000 7fac125fd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  3 01:18:27 compute-0 ceph-mgr[193109]: mgr[py] Loading python module 'zabbix'
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 01:18:28 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T01:18:28.102+0000 7fac125fd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: ms_deliver_dispatch: unhandled message 0x556670a1f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rysove restarted
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rysove
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rysove(active, starting, since 0.029257s)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr handle_mgr_map Activating!
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr handle_mgr_map I am now activating
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rysove", "id": "compute-0.rysove"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mon[192821]: Active manager daemon compute-0.rysove restarted
Dec  3 01:18:28 compute-0 ceph-mon[192821]: Activating manager daemon compute-0.rysove
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: balancer
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Starting
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:18:28
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Manager daemon compute-0.rysove is now available
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: cephadm
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: crash
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: devicehealth
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: iostat
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: nfs
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: orchestrator
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: pg_autoscaler
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Starting
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: progress
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loading...
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] No stored events to load
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded [] historic events
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] recovery thread starting
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] starting setup
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: rbd_support
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: restful
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [restful INFO root] server_addr: :: server_port: 8003
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [restful WARNING root] server not running: no certificate configured
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: status
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: telemetry
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] PerfHandler: starting
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TaskHandler: starting
Dec  3 01:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"} v 0) v1
Dec  3 01:18:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] setup complete
Dec  3 01:18:28 compute-0 ceph-mgr[193109]: mgr load Constructed class from module: volumes
Dec  3 01:18:29 compute-0 ceph-mon[192821]: Manager daemon compute-0.rysove is now available
Dec  3 01:18:29 compute-0 ceph-mon[192821]: Found migration_current of "None". Setting to last migration.
Dec  3 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/mirror_snapshot_schedule"}]: dispatch
Dec  3 01:18:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rysove/trash_purge_schedule"}]: dispatch
Dec  3 01:18:29 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rysove(active, since 1.12063s)
Dec  3 01:18:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  3 01:18:29 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  3 01:18:29 compute-0 optimistic_kilby[194004]: {
Dec  3 01:18:29 compute-0 optimistic_kilby[194004]:    "mgrmap_epoch": 7,
Dec  3 01:18:29 compute-0 optimistic_kilby[194004]:    "initialized": true
Dec  3 01:18:29 compute-0 optimistic_kilby[194004]: }
Dec  3 01:18:29 compute-0 podman[193988]: 2025-12-03 01:18:29.287849614 +0000 UTC m=+19.332240410 container died 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:29 compute-0 systemd[1]: libpod-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope: Deactivated successfully.
Dec  3 01:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d64fb510050882907986cfd80e9c18378061cdc38577dbe1fc89271996dd44-merged.mount: Deactivated successfully.
Dec  3 01:18:29 compute-0 podman[193988]: 2025-12-03 01:18:29.375589856 +0000 UTC m=+19.419980662 container remove 02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8 (image=quay.io/ceph/ceph:v18, name=optimistic_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:29 compute-0 systemd[1]: libpod-conmon-02f581213590560279935f0ffc48383056870d50a372367270ff17b306ff17b8.scope: Deactivated successfully.
Dec  3 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.499305568 +0000 UTC m=+0.081548555 container create abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.467679492 +0000 UTC m=+0.049922549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:29 compute-0 systemd[1]: Started libpod-conmon-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope.
Dec  3 01:18:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.720051509 +0000 UTC m=+0.302294556 container init abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.737268212 +0000 UTC m=+0.319511209 container start abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:18:29 compute-0 podman[158098]: time="2025-12-03T01:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:18:29 compute-0 podman[194194]: 2025-12-03 01:18:29.750413369 +0000 UTC m=+0.332656386 container attach abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23492 "" "Go-http-client/1.1"
Dec  3 01:18:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec  3 01:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Dec  3 01:18:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec  3 01:18:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923277 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec  3 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:18:30 compute-0 systemd[1]: libpod-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope: Deactivated successfully.
Dec  3 01:18:30 compute-0 podman[194194]: 2025-12-03 01:18:30.388416469 +0000 UTC m=+0.970659456 container died abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a205d1e514f771893a22cdb4875bd30a25f1a19bcba6c088a897b8b9213f8f23-merged.mount: Deactivated successfully.
Dec  3 01:18:30 compute-0 podman[194194]: 2025-12-03 01:18:30.489198845 +0000 UTC m=+1.071441812 container remove abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a (image=quay.io/ceph/ceph:v18, name=wonderful_booth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:30 compute-0 systemd[1]: libpod-conmon-abd8a741f5691ec1ab814d53ac1e77aeeca3e2ab3382e305a9910e5897114f3a.scope: Deactivated successfully.
Dec  3 01:18:30 compute-0 podman[194246]: 2025-12-03 01:18:30.550679345 +0000 UTC m=+0.103354680 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.577514184 +0000 UTC m=+0.059383982 container create bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 01:18:30 compute-0 podman[194250]: 2025-12-03 01:18:30.581712744 +0000 UTC m=+0.120947714 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 01:18:30 compute-0 podman[194238]: 2025-12-03 01:18:30.583576577 +0000 UTC m=+0.147345580 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:18:30 compute-0 podman[194245]: 2025-12-03 01:18:30.58961115 +0000 UTC m=+0.148587526 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec  3 01:18:30 compute-0 systemd[1]: Started libpod-conmon-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope.
Dec  3 01:18:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.555291407 +0000 UTC m=+0.037161225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.673207184 +0000 UTC m=+0.155077002 container init bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.691818117 +0000 UTC m=+0.173687905 container start bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:30 compute-0 podman[194286]: 2025-12-03 01:18:30.695675758 +0000 UTC m=+0.177545556 container attach bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 01:18:30 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rysove(active, since 2s)
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: [cephadm INFO cherrypy.error] [03/Dec/2025:01:18:30] ENGINE Bus STARTED
Dec  3 01:18:30 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : [03/Dec/2025:01:18:30] ENGINE Bus STARTED
Dec  3 01:18:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:18:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec  3 01:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_user
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  3 01:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec  3 01:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_config
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  3 01:18:31 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  3 01:18:31 compute-0 competent_bohr[194359]: ssh user set to ceph-admin. sudo will be used
Dec  3 01:18:31 compute-0 systemd[1]: libpod-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope: Deactivated successfully.
Dec  3 01:18:31 compute-0 podman[194286]: 2025-12-03 01:18:31.296863463 +0000 UTC m=+0.778733291 container died bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b4b9ee7bae800fdd1d48d74ec43cdfb7e36d82bf7048f0ac80cf9d3a0c643f-merged.mount: Deactivated successfully.
Dec  3 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Bus STARTING
Dec  3 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Serving on https://192.168.122.100:7150
Dec  3 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Client ('192.168.122.100', 37812) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Serving on http://192.168.122.100:8765
Dec  3 01:18:31 compute-0 ceph-mon[192821]: [03/Dec/2025:01:18:30] ENGINE Bus STARTED
Dec  3 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:31 compute-0 podman[194286]: 2025-12-03 01:18:31.379841019 +0000 UTC m=+0.861710827 container remove bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf (image=quay.io/ceph/ceph:v18, name=competent_bohr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:18:31 compute-0 systemd[1]: libpod-conmon-bb033e721b31ef5a86cb5f2240eb1b5992e40ddc7d0ce7f7b72ea04bb603edcf.scope: Deactivated successfully.
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:18:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.511480409 +0000 UTC m=+0.089748341 container create a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.481224893 +0000 UTC m=+0.059492865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:31 compute-0 systemd[1]: Started libpod-conmon-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope.
Dec  3 01:18:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.660035383 +0000 UTC m=+0.238303365 container init a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.688863859 +0000 UTC m=+0.267131811 container start a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:31 compute-0 podman[194406]: 2025-12-03 01:18:31.696074625 +0000 UTC m=+0.274342567 container attach a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec  3 01:18:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh private key
Dec  3 01:18:32 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  3 01:18:32 compute-0 systemd[1]: libpod-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope: Deactivated successfully.
Dec  3 01:18:32 compute-0 conmon[194422]: conmon a64e443648e2f219dff4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope/container/memory.events
Dec  3 01:18:32 compute-0 podman[194406]: 2025-12-03 01:18:32.347033036 +0000 UTC m=+0.925300988 container died a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:32 compute-0 ceph-mon[192821]: Set ssh ssh_user
Dec  3 01:18:32 compute-0 ceph-mon[192821]: Set ssh ssh_config
Dec  3 01:18:32 compute-0 ceph-mon[192821]: ssh user set to ceph-admin. sudo will be used
Dec  3 01:18:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d73d27ca8baf6a40e5a97ce3a913de10cf99936c72987724c952d72ef6dab3-merged.mount: Deactivated successfully.
Dec  3 01:18:32 compute-0 podman[194406]: 2025-12-03 01:18:32.440229495 +0000 UTC m=+1.018497417 container remove a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af (image=quay.io/ceph/ceph:v18, name=practical_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:18:32 compute-0 systemd[1]: libpod-conmon-a64e443648e2f219dff4fa36924369c2bad8fb8f21799898bda437c32222b2af.scope: Deactivated successfully.
Dec  3 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.562787295 +0000 UTC m=+0.082144834 container create 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.528966516 +0000 UTC m=+0.048324135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:32 compute-0 systemd[1]: Started libpod-conmon-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope.
Dec  3 01:18:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.713753908 +0000 UTC m=+0.233111507 container init 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.743880901 +0000 UTC m=+0.263238460 container start 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:18:32 compute-0 podman[194459]: 2025-12-03 01:18:32.751260802 +0000 UTC m=+0.270618381 container attach 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec  3 01:18:33 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec  3 01:18:33 compute-0 ceph-mon[192821]: Set ssh ssh_identity_key
Dec  3 01:18:33 compute-0 ceph-mon[192821]: Set ssh private key
Dec  3 01:18:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:33 compute-0 ceph-mgr[193109]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  3 01:18:33 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  3 01:18:33 compute-0 systemd[1]: libpod-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope: Deactivated successfully.
Dec  3 01:18:33 compute-0 podman[194501]: 2025-12-03 01:18:33.540190293 +0000 UTC m=+0.061667017 container died 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb83ef47e8c887f1fc7dc7fb5ee833d6917f54fcbd58c1cfe481d731c6e8ef6-merged.mount: Deactivated successfully.
Dec  3 01:18:33 compute-0 podman[194501]: 2025-12-03 01:18:33.615800148 +0000 UTC m=+0.137276872 container remove 43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7 (image=quay.io/ceph/ceph:v18, name=vigilant_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:18:33 compute-0 systemd[1]: libpod-conmon-43c73db351d39d5b779503d400dabe7c03c5058cf7b52da724f3d8cb3271b1d7.scope: Deactivated successfully.
Dec  3 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.757088614 +0000 UTC m=+0.084721367 container create c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.722768631 +0000 UTC m=+0.050401424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:33 compute-0 systemd[1]: Started libpod-conmon-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope.
Dec  3 01:18:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.915064898 +0000 UTC m=+0.242697711 container init c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.935122152 +0000 UTC m=+0.262754865 container start c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:18:33 compute-0 podman[194515]: 2025-12-03 01:18:33.940595419 +0000 UTC m=+0.268228172 container attach c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:18:34 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:34 compute-0 ceph-mon[192821]: Set ssh ssh_identity_pub
Dec  3 01:18:34 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:34 compute-0 elastic_carver[194530]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5ecfe2fcU5kzXrwNXMXzjiRxUzwK8oF8fdrLszAVoAy+DUtmImeC+47peZMOsTTrqix8ydvHWOXKAlmxJmmvnXn+I3jRZZaTkPCTt4Je2ClFXcOH2FtcM0sjmtzxWSN38IOGsugf5cTRq79WQuzzM3ONhanjwmk4bl0EUtIJaiEP37pO4216tx58/XIIJwdYM/By5PWhy7thuZYyCCVVTxGWigCOmE/1ndCn6IIkeKZJLbfQBXCBIi/S/1QG1DRN1zAsfKeIVL9RugQWFAWthIxzdjaRLHPPOfGJSFgZMGwLtw7GvWtAbJIoH8XL43xiyd7KOH6+oTkR4y/2JneoF4m96prdsYJUYwN0qbM12W1iKfWEIPfDL9nFQNFiBStP+86/I+GLsan1jvhHtVsQ59pMfXK6tmZe8RK4CAEMthH/lzI9zlVrNfCj0pEiR1FXVASJ25np6IMLEZbGsc1njBZ6fZ3iaee6MI6jLWt/lSPX7gLLn4Asq2x4P7/SxUWE= zuul@controller
Dec  3 01:18:34 compute-0 systemd[1]: libpod-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope: Deactivated successfully.
Dec  3 01:18:34 compute-0 podman[194515]: 2025-12-03 01:18:34.488759227 +0000 UTC m=+0.816391940 container died c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74fd3f54a13463d1e64678aba0927ef2c07369248defbc204b88bbcea4f58ba-merged.mount: Deactivated successfully.
Dec  3 01:18:34 compute-0 podman[194515]: 2025-12-03 01:18:34.549407173 +0000 UTC m=+0.877039896 container remove c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943 (image=quay.io/ceph/ceph:v18, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:18:34 compute-0 systemd[1]: libpod-conmon-c5be07c3c1ee7751cc30bc008f3d3bc595b14297ca1a586e8cf57e7262059943.scope: Deactivated successfully.
Dec  3 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.682211086 +0000 UTC m=+0.091746618 container create e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.645844955 +0000 UTC m=+0.055380557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:34 compute-0 systemd[1]: Started libpod-conmon-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope.
Dec  3 01:18:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.866763061 +0000 UTC m=+0.276298613 container init e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.888472873 +0000 UTC m=+0.298008395 container start e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:34 compute-0 podman[194567]: 2025-12-03 01:18:34.895780442 +0000 UTC m=+0.305315994 container attach e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:18:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053048 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:35 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:35 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 01:18:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 01:18:35 compute-0 systemd-logind[800]: New session 27 of user ceph-admin.
Dec  3 01:18:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 01:18:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  3 01:18:35 compute-0 podman[194611]: 2025-12-03 01:18:35.857114701 +0000 UTC m=+0.140638328 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 01:18:35 compute-0 systemd-logind[800]: New session 29 of user ceph-admin.
Dec  3 01:18:36 compute-0 systemd[194622]: Queued start job for default target Main User Target.
Dec  3 01:18:36 compute-0 systemd[194622]: Created slice User Application Slice.
Dec  3 01:18:36 compute-0 systemd[194622]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 01:18:36 compute-0 systemd[194622]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 01:18:36 compute-0 systemd[194622]: Reached target Paths.
Dec  3 01:18:36 compute-0 systemd[194622]: Reached target Timers.
Dec  3 01:18:36 compute-0 systemd[194622]: Starting D-Bus User Message Bus Socket...
Dec  3 01:18:36 compute-0 systemd[194622]: Starting Create User's Volatile Files and Directories...
Dec  3 01:18:36 compute-0 systemd[194622]: Finished Create User's Volatile Files and Directories.
Dec  3 01:18:36 compute-0 systemd[194622]: Listening on D-Bus User Message Bus Socket.
Dec  3 01:18:36 compute-0 systemd[194622]: Reached target Sockets.
Dec  3 01:18:36 compute-0 systemd[194622]: Reached target Basic System.
Dec  3 01:18:36 compute-0 systemd[194622]: Reached target Main User Target.
Dec  3 01:18:36 compute-0 systemd[194622]: Startup finished in 222ms.
Dec  3 01:18:36 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  3 01:18:36 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec  3 01:18:36 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec  3 01:18:36 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:36 compute-0 systemd-logind[800]: New session 30 of user ceph-admin.
Dec  3 01:18:36 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec  3 01:18:37 compute-0 systemd-logind[800]: New session 31 of user ceph-admin.
Dec  3 01:18:37 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec  3 01:18:37 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  3 01:18:37 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  3 01:18:37 compute-0 systemd-logind[800]: New session 32 of user ceph-admin.
Dec  3 01:18:38 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec  3 01:18:38 compute-0 ceph-mon[192821]: Deploying cephadm binary to compute-0
Dec  3 01:18:38 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:38 compute-0 systemd-logind[800]: New session 33 of user ceph-admin.
Dec  3 01:18:38 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec  3 01:18:39 compute-0 systemd-logind[800]: New session 34 of user ceph-admin.
Dec  3 01:18:39 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec  3 01:18:39 compute-0 systemd-logind[800]: New session 35 of user ceph-admin.
Dec  3 01:18:39 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec  3 01:18:40 compute-0 podman[194972]: 2025-12-03 01:18:40.089000877 +0000 UTC m=+0.141837953 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=)
Dec  3 01:18:40 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:40 compute-0 systemd-logind[800]: New session 36 of user ceph-admin.
Dec  3 01:18:40 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Dec  3 01:18:41 compute-0 systemd-logind[800]: New session 37 of user ceph-admin.
Dec  3 01:18:41 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec  3 01:18:41 compute-0 systemd-logind[800]: New session 38 of user ceph-admin.
Dec  3 01:18:41 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec  3 01:18:42 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:42 compute-0 systemd-logind[800]: New session 39 of user ceph-admin.
Dec  3 01:18:42 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec  3 01:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:18:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:43 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added host compute-0
Dec  3 01:18:43 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 01:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:18:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:18:43 compute-0 wonderful_lewin[194582]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 01:18:43 compute-0 systemd[1]: libpod-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope: Deactivated successfully.
Dec  3 01:18:43 compute-0 podman[195266]: 2025-12-03 01:18:43.409484943 +0000 UTC m=+0.031727660 container died e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c842aa05d05f03c944f0d99286b81f6fb140a3fc0c884271bedcc3484ccc4f6-merged.mount: Deactivated successfully.
Dec  3 01:18:43 compute-0 podman[195266]: 2025-12-03 01:18:43.490825042 +0000 UTC m=+0.113067789 container remove e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509 (image=quay.io/ceph/ceph:v18, name=wonderful_lewin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:18:43 compute-0 systemd[1]: libpod-conmon-e012d32128b17c0fafac99cf104576a1df9006de548fc44b591b3184c6972509.scope: Deactivated successfully.
Dec  3 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.600999827 +0000 UTC m=+0.061225254 container create 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:18:43 compute-0 systemd[1]: Started libpod-conmon-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope.
Dec  3 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.578785391 +0000 UTC m=+0.039010858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.767509435 +0000 UTC m=+0.227734902 container init 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.786599741 +0000 UTC m=+0.246825198 container start 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:43 compute-0 podman[195315]: 2025-12-03 01:18:43.793248631 +0000 UTC m=+0.253474088 container attach 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:18:44 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.194644116 +0000 UTC m=+0.100696235 container create 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.156402801 +0000 UTC m=+0.062454970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:44 compute-0 systemd[1]: Started libpod-conmon-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope.
Dec  3 01:18:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:44 compute-0 ceph-mon[192821]: Added host compute-0
Dec  3 01:18:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.335043726 +0000 UTC m=+0.241095835 container init 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.349754068 +0000 UTC m=+0.255806167 container start 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.355146292 +0000 UTC m=+0.261198391 container attach 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:18:44 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:44 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  3 01:18:44 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  3 01:18:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 01:18:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:44 compute-0 agitated_black[195356]: Scheduled mon update...
Dec  3 01:18:44 compute-0 systemd[1]: libpod-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope: Deactivated successfully.
Dec  3 01:18:44 compute-0 podman[195454]: 2025-12-03 01:18:44.553996656 +0000 UTC m=+0.045296668 container died 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b578146de9772630a26ecc5e7b03c9facff4cf1506e92d1beee06b231cf258c2-merged.mount: Deactivated successfully.
Dec  3 01:18:44 compute-0 podman[195454]: 2025-12-03 01:18:44.618964167 +0000 UTC m=+0.110264169 container remove 31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955 (image=quay.io/ceph/ceph:v18, name=agitated_black, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:44 compute-0 systemd[1]: libpod-conmon-31337545b77aed56dfa8bddd617c0a7d0458a629348154b3296aaa32fc0a9955.scope: Deactivated successfully.
Dec  3 01:18:44 compute-0 compassionate_ritchie[195447]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  3 01:18:44 compute-0 systemd[1]: libpod-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope: Deactivated successfully.
Dec  3 01:18:44 compute-0 conmon[195447]: conmon 6e8eefbe0b9cd3223c0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope/container/memory.events
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.708027227 +0000 UTC m=+0.614079386 container died 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5204bc3182ad956b7f2dabf9c3b03b0d6b2eea500775c0a5481485a8ed6d3909-merged.mount: Deactivated successfully.
Dec  3 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.780471642 +0000 UTC m=+0.100566741 container create fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Dec  3 01:18:44 compute-0 podman[195412]: 2025-12-03 01:18:44.816840943 +0000 UTC m=+0.722893032 container remove 6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78 (image=quay.io/ceph/ceph:v18, name=compassionate_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:44 compute-0 systemd[1]: Started libpod-conmon-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope.
Dec  3 01:18:44 compute-0 systemd[1]: libpod-conmon-6e8eefbe0b9cd3223c0dccf23a09ba66e9e810fdcc21decf0b565b571cd9cf78.scope: Deactivated successfully.
Dec  3 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.751565104 +0000 UTC m=+0.071660233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec  3 01:18:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.912298257 +0000 UTC m=+0.232393386 container init fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.932503775 +0000 UTC m=+0.252598884 container start fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 01:18:44 compute-0 podman[195468]: 2025-12-03 01:18:44.937329284 +0000 UTC m=+0.257424403 container attach fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:18:44 compute-0 podman[195498]: 2025-12-03 01:18:44.994926283 +0000 UTC m=+0.132496195 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:45 compute-0 ceph-mon[192821]: Saving service mon spec with placement count:5
Dec  3 01:18:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:45 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:45 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  3 01:18:45 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  3 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:18:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:45 compute-0 jolly_nightingale[195496]: Scheduled mgr update...
Dec  3 01:18:45 compute-0 systemd[1]: libpod-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope: Deactivated successfully.
Dec  3 01:18:45 compute-0 podman[195468]: 2025-12-03 01:18:45.566600944 +0000 UTC m=+0.886696083 container died fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-172394f8995542d98d07ed233d7f689fb1bc97edc9519758768789dda1ffd64d-merged.mount: Deactivated successfully.
Dec  3 01:18:45 compute-0 podman[195468]: 2025-12-03 01:18:45.641429916 +0000 UTC m=+0.961525025 container remove fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe (image=quay.io/ceph/ceph:v18, name=jolly_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:45 compute-0 systemd[1]: libpod-conmon-fdb8b09c3e853294d1fd85f4f02e2a3818b0627df8904ce6d9caef0dff3949fe.scope: Deactivated successfully.
Dec  3 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.750277153 +0000 UTC m=+0.068453471 container create 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:18:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:18:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.725813813 +0000 UTC m=+0.043990161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:45 compute-0 systemd[1]: Started libpod-conmon-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope.
Dec  3 01:18:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.89996282 +0000 UTC m=+0.218139198 container init 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.915847855 +0000 UTC m=+0.234024203 container start 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:45 compute-0 podman[195666]: 2025-12-03 01:18:45.921952089 +0000 UTC m=+0.240128437 container attach 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:18:46 compute-0 ceph-mgr[193109]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 01:18:46 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:46 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service crash spec with placement *
Dec  3 01:18:46 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  3 01:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 01:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:46 compute-0 kind_chandrasekhar[195694]: Scheduled crash update...
Dec  3 01:18:46 compute-0 ceph-mon[192821]: Saving service mgr spec with placement count:2
Dec  3 01:18:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:46 compute-0 systemd[1]: libpod-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope: Deactivated successfully.
Dec  3 01:18:46 compute-0 podman[195666]: 2025-12-03 01:18:46.555239814 +0000 UTC m=+0.873416172 container died 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1485f5fee1eb9f45d6084223e7cfaa58e75dfc890ec54b0bee1e22896d1a6a-merged.mount: Deactivated successfully.
Dec  3 01:18:46 compute-0 podman[195666]: 2025-12-03 01:18:46.663097443 +0000 UTC m=+0.981273761 container remove 6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c (image=quay.io/ceph/ceph:v18, name=kind_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:18:46 compute-0 systemd[1]: libpod-conmon-6a44e23f7df4167f895b2ecd703ebe9cc9484a07cba83458349024496700446c.scope: Deactivated successfully.
Dec  3 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.798151871 +0000 UTC m=+0.097987327 container create 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.753644296 +0000 UTC m=+0.053479842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:46 compute-0 systemd[1]: Started libpod-conmon-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope.
Dec  3 01:18:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.939359924 +0000 UTC m=+0.239195400 container init 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.961603061 +0000 UTC m=+0.261438507 container start 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:46 compute-0 podman[195848]: 2025-12-03 01:18:46.966054959 +0000 UTC m=+0.265890425 container attach 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:18:47 compute-0 podman[195910]: 2025-12-03 01:18:47.193821021 +0000 UTC m=+0.122476308 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec  3 01:18:47 compute-0 podman[195910]: 2025-12-03 01:18:47.533302132 +0000 UTC m=+0.461957349 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1756499979' entity='client.admin' 
Dec  3 01:18:47 compute-0 ceph-mon[192821]: Saving service crash spec with placement *
Dec  3 01:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:47 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1756499979' entity='client.admin' 
Dec  3 01:18:47 compute-0 systemd[1]: libpod-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope: Deactivated successfully.
Dec  3 01:18:47 compute-0 podman[195848]: 2025-12-03 01:18:47.573239205 +0000 UTC m=+0.873074681 container died 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cacc7767cc3a920c1752ba4a14070bfedebf64769de9e49fe559c59eee0ef6b-merged.mount: Deactivated successfully.
Dec  3 01:18:47 compute-0 podman[195848]: 2025-12-03 01:18:47.648272184 +0000 UTC m=+0.948107630 container remove 2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1 (image=quay.io/ceph/ceph:v18, name=suspicious_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec  3 01:18:47 compute-0 systemd[1]: libpod-conmon-2929ece3817eff21b4d907bf502ddffa4220e72c190ecab2371d23ed8feb8da1.scope: Deactivated successfully.
Dec  3 01:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.739461095 +0000 UTC m=+0.065179147 container create ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:18:47 compute-0 systemd[1]: Started libpod-conmon-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope.
Dec  3 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.705427101 +0000 UTC m=+0.031145203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.891839249 +0000 UTC m=+0.217557291 container init ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.9002548 +0000 UTC m=+0.225972832 container start ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:18:47 compute-0 podman[195985]: 2025-12-03 01:18:47.905120349 +0000 UTC m=+0.230838371 container attach ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:48 compute-0 ceph-mgr[193109]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  3 01:18:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 01:18:48 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196139 (sysctl)
Dec  3 01:18:48 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  3 01:18:48 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  3 01:18:48 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec  3 01:18:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:48 compute-0 systemd[1]: libpod-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope: Deactivated successfully.
Dec  3 01:18:48 compute-0 podman[196147]: 2025-12-03 01:18:48.600395539 +0000 UTC m=+0.067435022 container died ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c01183687b400de10e62e6c485c9d5ae5c7b303dffac0d8d5b8348ef90fca3-merged.mount: Deactivated successfully.
Dec  3 01:18:48 compute-0 podman[196147]: 2025-12-03 01:18:48.670467096 +0000 UTC m=+0.137506509 container remove ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646 (image=quay.io/ceph/ceph:v18, name=amazing_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:18:48 compute-0 systemd[1]: libpod-conmon-ce75635221c396a77977b1c48b6cbef77afe59b2dba24f4c7c8e4d4bae819646.scope: Deactivated successfully.
Dec  3 01:18:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:48 compute-0 ceph-mon[192821]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 01:18:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.785732237 +0000 UTC m=+0.074341740 container create 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.754013218 +0000 UTC m=+0.042622801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:48 compute-0 systemd[1]: Started libpod-conmon-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope.
Dec  3 01:18:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.931791869 +0000 UTC m=+0.220401372 container init 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.943935137 +0000 UTC m=+0.232544680 container start 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:48 compute-0 podman[196166]: 2025-12-03 01:18:48.950122724 +0000 UTC m=+0.238732247 container attach 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:18:49 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:18:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:18:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:49 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added label _admin to host compute-0
Dec  3 01:18:49 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  3 01:18:49 compute-0 goofy_ptolemy[196193]: Added label _admin to host compute-0
Dec  3 01:18:49 compute-0 systemd[1]: libpod-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope: Deactivated successfully.
Dec  3 01:18:49 compute-0 podman[196166]: 2025-12-03 01:18:49.539364438 +0000 UTC m=+0.827974021 container died 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-480c9b9eed57719c146fb487465dee6f57d1bd0f70bbdb458cf18c53ba7558ed-merged.mount: Deactivated successfully.
Dec  3 01:18:49 compute-0 podman[196166]: 2025-12-03 01:18:49.613083669 +0000 UTC m=+0.901693222 container remove 53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d (image=quay.io/ceph/ceph:v18, name=goofy_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:18:49 compute-0 systemd[1]: libpod-conmon-53ae2e7369ee1568c5816d3c0797d4a6d107a370fef4df44af6404b13dc5d60d.scope: Deactivated successfully.
Dec  3 01:18:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:18:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.712770334 +0000 UTC m=+0.055172101 container create d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:49 compute-0 systemd[1]: Started libpod-conmon-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope.
Dec  3 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.690379532 +0000 UTC m=+0.032781319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.866708442 +0000 UTC m=+0.209110219 container init d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.874202656 +0000 UTC m=+0.216604413 container start d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:49 compute-0 podman[196346]: 2025-12-03 01:18:49.879360364 +0000 UTC m=+0.221762131 container attach d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:18:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec  3 01:18:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211957164' entity='client.admin' 
Dec  3 01:18:50 compute-0 systemd[1]: libpod-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope: Deactivated successfully.
Dec  3 01:18:50 compute-0 podman[196346]: 2025-12-03 01:18:50.503435135 +0000 UTC m=+0.845836912 container died d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7e3af88523476c6b6f697818c6752a1f4ac9ca72978fd199ee8fd84a0abe99-merged.mount: Deactivated successfully.
Dec  3 01:18:50 compute-0 podman[196346]: 2025-12-03 01:18:50.606674062 +0000 UTC m=+0.949075809 container remove d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f (image=quay.io/ceph/ceph:v18, name=busy_greider, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 01:18:50 compute-0 systemd[1]: libpod-conmon-d45a036d82a496d4409d3b996677582147aceb330b052b2372ac0bbe0df0443f.scope: Deactivated successfully.
Dec  3 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.687893468 +0000 UTC m=+0.058646221 container create a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:18:50 compute-0 systemd[1]: Started libpod-conmon-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope.
Dec  3 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.654221173 +0000 UTC m=+0.024973926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:50 compute-0 ceph-mon[192821]: Added label _admin to host compute-0
Dec  3 01:18:50 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/211957164' entity='client.admin' 
Dec  3 01:18:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.822179753 +0000 UTC m=+0.192932566 container init a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.837189693 +0000 UTC m=+0.207942446 container start a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:18:50 compute-0 podman[196521]: 2025-12-03 01:18:50.844116231 +0000 UTC m=+0.214868984 container attach a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:50 compute-0 podman[196554]: 2025-12-03 01:18:50.989397151 +0000 UTC m=+0.086409075 container create 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:50.957414605 +0000 UTC m=+0.054426579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:18:51 compute-0 systemd[1]: Started libpod-conmon-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope.
Dec  3 01:18:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.129821492 +0000 UTC m=+0.226833466 container init 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.147388455 +0000 UTC m=+0.244400369 container start 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:18:51 compute-0 musing_neumann[196570]: 167 167
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.153482829 +0000 UTC m=+0.250494813 container attach 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:51 compute-0 systemd[1]: libpod-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope: Deactivated successfully.
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.158600546 +0000 UTC m=+0.255612520 container died 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb3325b07966b12e0194ae71868b7d851bbdc1eb1083cc4573590477726172e-merged.mount: Deactivated successfully.
Dec  3 01:18:51 compute-0 podman[196554]: 2025-12-03 01:18:51.224865124 +0000 UTC m=+0.321877018 container remove 3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:18:51 compute-0 systemd[1]: libpod-conmon-3705ecc855db4f0e2e6dd705dccb21a02ec411a98d8e2031865fc2760b7b9add.scope: Deactivated successfully.
Dec  3 01:18:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec  3 01:18:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2354559620' entity='client.admin' 
Dec  3 01:18:51 compute-0 charming_haibt[196549]: set mgr/dashboard/cluster/status
Dec  3 01:18:51 compute-0 systemd[1]: libpod-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope: Deactivated successfully.
Dec  3 01:18:51 compute-0 podman[196606]: 2025-12-03 01:18:51.628403789 +0000 UTC m=+0.047475390 container died a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bc935939c322b2366642a68ce39975c65860264780960cbdc49f88f5349f613-merged.mount: Deactivated successfully.
Dec  3 01:18:51 compute-0 podman[196606]: 2025-12-03 01:18:51.698823016 +0000 UTC m=+0.117894577 container remove a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33 (image=quay.io/ceph/ceph:v18, name=charming_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:18:51 compute-0 systemd[1]: libpod-conmon-a9225dda4944cce0e52fe7bcd26fd4a59312560cf3cfae794484e8c33525ad33.scope: Deactivated successfully.
Dec  3 01:18:51 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2354559620' entity='client.admin' 
Dec  3 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.067197515 +0000 UTC m=+0.088504336 container create c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.034163879 +0000 UTC m=+0.055470730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:18:52 compute-0 systemd[1]: Started libpod-conmon-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope.
Dec  3 01:18:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.237883393 +0000 UTC m=+0.259190174 container init c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.265234676 +0000 UTC m=+0.286541467 container start c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:18:52 compute-0 podman[196625]: 2025-12-03 01:18:52.27132541 +0000 UTC m=+0.292632561 container attach c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:52 compute-0 python3[196672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.568084638 +0000 UTC m=+0.102912508 container create 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.527719173 +0000 UTC m=+0.062547093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:52 compute-0 systemd[1]: Started libpod-conmon-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope.
Dec  3 01:18:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.772817291 +0000 UTC m=+0.307645181 container init 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.788693456 +0000 UTC m=+0.323521306 container start 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:18:52 compute-0 podman[196673]: 2025-12-03 01:18:52.794977006 +0000 UTC m=+0.329804886 container attach 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec  3 01:18:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003315585' entity='client.admin' 
Dec  3 01:18:53 compute-0 systemd[1]: libpod-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope: Deactivated successfully.
Dec  3 01:18:53 compute-0 podman[196673]: 2025-12-03 01:18:53.446438911 +0000 UTC m=+0.981266811 container died 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f838aa5584e7cf1be4985ee1319f9f4aaa2f36560cd8b2259a8221b89830b19b-merged.mount: Deactivated successfully.
Dec  3 01:18:53 compute-0 podman[196673]: 2025-12-03 01:18:53.535047949 +0000 UTC m=+1.069875799 container remove 2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa (image=quay.io/ceph/ceph:v18, name=gallant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:53 compute-0 systemd[1]: libpod-conmon-2290f2aa86a835896aeffe38502e9046951b56a81a55986fc50bbdadaab0b2aa.scope: Deactivated successfully.
Dec  3 01:18:53 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4003315585' entity='client.admin' 
Dec  3 01:18:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:54 compute-0 elegant_mclean[196645]: [
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:    {
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "available": false,
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "ceph_device": false,
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "lsm_data": {},
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "lvs": [],
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "path": "/dev/sr0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "rejected_reasons": [
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "Has a FileSystem",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "Insufficient space (<5GB)"
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        ],
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        "sys_api": {
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "actuators": null,
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "device_nodes": "sr0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "devname": "sr0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "human_readable_size": "482.00 KB",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "id_bus": "ata",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "model": "QEMU DVD-ROM",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "nr_requests": "2",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "parent": "/dev/sr0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "partitions": {},
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "path": "/dev/sr0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "removable": "1",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "rev": "2.5+",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "ro": "0",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "rotational": "1",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "sas_address": "",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "sas_device_handle": "",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "scheduler_mode": "mq-deadline",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "sectors": 0,
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "sectorsize": "2048",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "size": 493568.0,
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "support_discard": "2048",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "type": "disk",
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:            "vendor": "QEMU"
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:        }
Dec  3 01:18:54 compute-0 elegant_mclean[196645]:    }
Dec  3 01:18:54 compute-0 elegant_mclean[196645]: ]
Dec  3 01:18:54 compute-0 systemd[1]: libpod-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Deactivated successfully.
Dec  3 01:18:54 compute-0 systemd[1]: libpod-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Consumed 2.190s CPU time.
Dec  3 01:18:54 compute-0 podman[196625]: 2025-12-03 01:18:54.425762135 +0000 UTC m=+2.447068986 container died c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d61c327c9b180440e660d07b1d252b636b753de89c5e59a3ae97e8104f1545e-merged.mount: Deactivated successfully.
Dec  3 01:18:54 compute-0 podman[196625]: 2025-12-03 01:18:54.531123802 +0000 UTC m=+2.552430603 container remove c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:54 compute-0 systemd[1]: libpod-conmon-c820a3ba15f95b90bf9ea1322639d5f258c9cf966923fb03e353e18742859af0.scope: Deactivated successfully.
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:18:54 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  3 01:18:54 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  3 01:18:54 compute-0 ansible-async_wrapper.py[198733]: Invoked with j723132833440 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724733.9468582-37007-50407713382505/AnsiballZ_command.py _
Dec  3 01:18:54 compute-0 ansible-async_wrapper.py[198782]: Starting module and watcher
Dec  3 01:18:54 compute-0 ansible-async_wrapper.py[198782]: Start watching 198783 (30)
Dec  3 01:18:54 compute-0 ansible-async_wrapper.py[198783]: Start module (198783)
Dec  3 01:18:54 compute-0 ansible-async_wrapper.py[198733]: Return async_wrapper task started.
Dec  3 01:18:55 compute-0 python3[198784]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:18:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.257813421 +0000 UTC m=+0.071801307 container create 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.22843542 +0000 UTC m=+0.042423306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:55 compute-0 systemd[1]: Started libpod-conmon-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope.
Dec  3 01:18:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.418644797 +0000 UTC m=+0.232632753 container init 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.433992076 +0000 UTC m=+0.247979942 container start 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:18:55 compute-0 podman[198815]: 2025-12-03 01:18:55.439392631 +0000 UTC m=+0.253380517 container attach 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:18:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:18:55 compute-0 ceph-mon[192821]: Updating compute-0:/etc/ceph/ceph.conf
Dec  3 01:18:56 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:18:56 compute-0 focused_cray[198858]: 
Dec  3 01:18:56 compute-0 focused_cray[198858]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 01:18:56 compute-0 systemd[1]: libpod-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope: Deactivated successfully.
Dec  3 01:18:56 compute-0 podman[198815]: 2025-12-03 01:18:56.037435857 +0000 UTC m=+0.851423713 container died 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbde8df17b6ac9709b04ae729ae70c4eaa216bb8ead2372f452cb249bbcf89a8-merged.mount: Deactivated successfully.
Dec  3 01:18:56 compute-0 podman[198815]: 2025-12-03 01:18:56.129905655 +0000 UTC m=+0.943893511 container remove 7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d (image=quay.io/ceph/ceph:v18, name=focused_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:18:56 compute-0 systemd[1]: libpod-conmon-7d35d7d3b88143b6acad947a27a5056cb3a432a1459eed72e8e3b26f6772f15d.scope: Deactivated successfully.
Dec  3 01:18:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:56 compute-0 ansible-async_wrapper.py[198783]: Module complete (198783)
Dec  3 01:18:56 compute-0 python3[199163]: ansible-ansible.legacy.async_status Invoked with jid=j723132833440.198733 mode=status _async_dir=/root/.ansible_async
Dec  3 01:18:56 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec  3 01:18:56 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec  3 01:18:57 compute-0 python3[199295]: ansible-ansible.legacy.async_status Invoked with jid=j723132833440.198733 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 01:18:57 compute-0 ceph-mon[192821]: Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.conf
Dec  3 01:18:57 compute-0 python3[199442]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:18:58 compute-0 auditd[706]: Audit daemon rotating log files
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:18:58 compute-0 python3[199615]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.540638928 +0000 UTC m=+0.093633062 container create 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.503643999 +0000 UTC m=+0.056638173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:18:58 compute-0 systemd[1]: Started libpod-conmon-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope.
Dec  3 01:18:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.681129901 +0000 UTC m=+0.234124085 container init 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.699398765 +0000 UTC m=+0.252392889 container start 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:18:58 compute-0 podman[199642]: 2025-12-03 01:18:58.705967533 +0000 UTC m=+0.258961657 container attach 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:18:59 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 01:18:59 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 01:18:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:18:59 compute-0 flamboyant_matsumoto[199684]: 
Dec  3 01:18:59 compute-0 flamboyant_matsumoto[199684]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 01:18:59 compute-0 systemd[1]: libpod-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope: Deactivated successfully.
Dec  3 01:18:59 compute-0 podman[199642]: 2025-12-03 01:18:59.311117432 +0000 UTC m=+0.864111556 container died 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b42ad78bea4fa7c2789aec3d10ac30a430d7bd7aa54e754a40004eaf81b18ead-merged.mount: Deactivated successfully.
Dec  3 01:18:59 compute-0 podman[199642]: 2025-12-03 01:18:59.397715502 +0000 UTC m=+0.950709606 container remove 01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62 (image=quay.io/ceph/ceph:v18, name=flamboyant_matsumoto, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:18:59 compute-0 systemd[1]: libpod-conmon-01d1de545c347484e9eea556ca83d96dbf732995fa1cc74f1b896c015dae4f62.scope: Deactivated successfully.
Dec  3 01:18:59 compute-0 podman[158098]: time="2025-12-03T01:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22105 "" "Go-http-client/1.1"
Dec  3 01:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3950 "" "Go-http-client/1.1"
Dec  3 01:18:59 compute-0 python3[199968]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:18:59 compute-0 ansible-async_wrapper.py[198782]: Done in kid B.
Dec  3 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.04963092 +0000 UTC m=+0.088293399 container create 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.025470828 +0000 UTC m=+0.064133297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:00 compute-0 systemd[1]: Started libpod-conmon-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope.
Dec  3 01:19:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.198613137 +0000 UTC m=+0.237275656 container init 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.226340531 +0000 UTC m=+0.265002990 container start 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:19:00 compute-0 podman[199994]: 2025-12-03 01:19:00.233177736 +0000 UTC m=+0.271840255 container attach 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:00 compute-0 ceph-mon[192821]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 01:19:00 compute-0 podman[200162]: 2025-12-03 01:19:00.862021574 +0000 UTC m=+0.106205122 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:19:00 compute-0 podman[200169]: 2025-12-03 01:19:00.872083792 +0000 UTC m=+0.108028694 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec  3 01:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec  3 01:19:00 compute-0 podman[200171]: 2025-12-03 01:19:00.887050491 +0000 UTC m=+0.113539263 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Dec  3 01:19:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/457463382' entity='client.admin' 
Dec  3 01:19:00 compute-0 systemd[1]: libpod-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope: Deactivated successfully.
Dec  3 01:19:00 compute-0 podman[200177]: 2025-12-03 01:19:00.939813322 +0000 UTC m=+0.163858713 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 01:19:00 compute-0 podman[200294]: 2025-12-03 01:19:00.962041938 +0000 UTC m=+0.033040077 container died 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f210e829118c98b3fbd7e8bd25107fbac4d644e00d8b5ec410f31dc370544fd9-merged.mount: Deactivated successfully.
Dec  3 01:19:01 compute-0 podman[200294]: 2025-12-03 01:19:01.016064285 +0000 UTC m=+0.087062434 container remove 4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751 (image=quay.io/ceph/ceph:v18, name=exciting_goldwasser, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:01 compute-0 systemd[1]: libpod-conmon-4260122c567d71c9f04b0b0f17a502267816ef8525a1faf3fd35559b6702b751.scope: Deactivated successfully.
Dec  3 01:19:01 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec  3 01:19:01 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:19:01 compute-0 python3[200427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:19:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.529409556 +0000 UTC m=+0.074668780 container create 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.501593209 +0000 UTC m=+0.046852423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:01 compute-0 systemd[1]: Started libpod-conmon-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope.
Dec  3 01:19:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.65108439 +0000 UTC m=+0.196343584 container init 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.67168111 +0000 UTC m=+0.216940304 container start 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:01 compute-0 podman[200453]: 2025-12-03 01:19:01.676053624 +0000 UTC m=+0.221312808 container attach 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:01 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/457463382' entity='client.admin' 
Dec  3 01:19:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec  3 01:19:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3890487829' entity='client.admin' 
Dec  3 01:19:02 compute-0 systemd[1]: libpod-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope: Deactivated successfully.
Dec  3 01:19:02 compute-0 podman[200453]: 2025-12-03 01:19:02.270860147 +0000 UTC m=+0.816119441 container died 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-47580e952cfc0f7f6faa1a11fc1e19d39e6d2ada54a2ffe0ccbd11e9e80533a4-merged.mount: Deactivated successfully.
Dec  3 01:19:02 compute-0 podman[200453]: 2025-12-03 01:19:02.362642655 +0000 UTC m=+0.907901849 container remove 2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db (image=quay.io/ceph/ceph:v18, name=zealous_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:19:02 compute-0 systemd[1]: libpod-conmon-2d6364d302020bbdf74b47f068aeec5f042d0aa09e57bfe9ba623fc5497453db.scope: Deactivated successfully.
Dec  3 01:19:02 compute-0 python3[200755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:19:02 compute-0 ceph-mon[192821]: Updating compute-0:/var/lib/ceph/3765feb2-36f8-5b86-b74c-64e9221f9c4c/config/ceph.client.admin.keyring
Dec  3 01:19:02 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3890487829' entity='client.admin' 
Dec  3 01:19:02 compute-0 podman[200784]: 2025-12-03 01:19:02.938172316 +0000 UTC m=+0.097994367 container create 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:19:02 compute-0 podman[200784]: 2025-12-03 01:19:02.902847445 +0000 UTC m=+0.062669556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:03 compute-0 systemd[1]: Started libpod-conmon-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope.
Dec  3 01:19:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.094992487 +0000 UTC m=+0.254814538 container init 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.117199353 +0000 UTC m=+0.277021394 container start 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:03 compute-0 podman[200784]: 2025-12-03 01:19:03.12374267 +0000 UTC m=+0.283564761 container attach 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1))
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:03 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  3 01:19:03 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  3 01:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec  3 01:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 01:19:03 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  3 01:19:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  3 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 01:19:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  3 01:19:04 compute-0 happy_mclean[200838]: set require_min_compat_client to mimic
Dec  3 01:19:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.693813665 +0000 UTC m=+0.078656019 container create d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:04 compute-0 systemd[1]: libpod-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope: Deactivated successfully.
Dec  3 01:19:04 compute-0 podman[200784]: 2025-12-03 01:19:04.700931868 +0000 UTC m=+1.860753909 container died 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.661522207 +0000 UTC m=+0.046364611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:04 compute-0 systemd[1]: Started libpod-conmon-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope.
Dec  3 01:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d6785f5b2446ff6cc6713b6d79ca5e12e60708148887feec57cab2e5e7a458-merged.mount: Deactivated successfully.
Dec  3 01:19:04 compute-0 podman[200784]: 2025-12-03 01:19:04.795359012 +0000 UTC m=+1.955181033 container remove 7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038 (image=quay.io/ceph/ceph:v18, name=happy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:19:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:04 compute-0 systemd[1]: libpod-conmon-7db5a3a4a34f47722367833a829ef5d158958db0c15b009f1a786fd69ae1a038.scope: Deactivated successfully.
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.834201849 +0000 UTC m=+0.219044243 container init d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.849361468 +0000 UTC m=+0.234203792 container start d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.854663814 +0000 UTC m=+0.239506208 container attach d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:19:04 compute-0 agitated_wright[201129]: 167 167
Dec  3 01:19:04 compute-0 systemd[1]: libpod-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope: Deactivated successfully.
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.861688734 +0000 UTC m=+0.246531058 container died d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5bdf2329d7c2ea74b7a6855304786106879e03154c209336dafa3b371de0f4-merged.mount: Deactivated successfully.
Dec  3 01:19:04 compute-0 ceph-mon[192821]: Deploying daemon crash.compute-0 on compute-0
Dec  3 01:19:04 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3216409173' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 01:19:04 compute-0 podman[201104]: 2025-12-03 01:19:04.934938424 +0000 UTC m=+0.319780748 container remove d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:19:04 compute-0 systemd[1]: libpod-conmon-d0363b4c46e03bf6783a6cc95680c146b7e98508421a82fa3b554439dc0b9b4a.scope: Deactivated successfully.
Dec  3 01:19:05 compute-0 systemd[1]: Reloading.
Dec  3 01:19:05 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:05 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:05 compute-0 systemd[1]: Reloading.
Dec  3 01:19:05 compute-0 python3[201212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:19:05 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:05 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.702030821 +0000 UTC m=+0.062026583 container create bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.679253396 +0000 UTC m=+0.039249168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:05 compute-0 systemd[1]: Started libpod-conmon-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope.
Dec  3 01:19:05 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:05 compute-0 podman[201234]: 2025-12-03 01:19:05.975217582 +0000 UTC m=+0.335213324 container init bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:06 compute-0 podman[201234]: 2025-12-03 01:19:06.003403866 +0000 UTC m=+0.363399638 container start bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:06 compute-0 podman[201234]: 2025-12-03 01:19:06.017961829 +0000 UTC m=+0.377957581 container attach bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:19:06 compute-0 podman[201267]: 2025-12-03 01:19:06.108246187 +0000 UTC m=+0.173052643 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.361290481 +0000 UTC m=+0.085180467 container create d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.32268587 +0000 UTC m=+0.046575906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fde572be95ff7fddc08501c609284090b33ad8da703642f5821a51fde595a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.559490188 +0000 UTC m=+0.283380224 container init d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:06 compute-0 podman[201334]: 2025-12-03 01:19:06.574021161 +0000 UTC m=+0.297911147 container start d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:19:06 compute-0 bash[201334]: d1d072b9d1367535ea9a97a406976e46273f77b5fff1017f3092157bee37d42b
Dec  3 01:19:06 compute-0 systemd[1]: Started Ceph crash.compute-0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1))
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event bc9c9826-325b-4f1a-99c8-11b99d20a9ef (Updating crash deployment (+1 -> 1)) in 3 seconds
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e172a726-e37c-4dea-ac57-a1fbabd58d38 does not exist
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2))
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 01:19:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec  3 01:19:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec  3 01:19:06 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:06.998+0000 7fb179b59640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:06.998+0000 7fb179b59640 -1 AuthRegistry(0x7fb174066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.000+0000 7fb179b59640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.000+0000 7fb179b59640 -1 AuthRegistry(0x7fb179b58000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.003+0000 7fb1737fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: 2025-12-03T01:19:07.004+0000 7fb179b59640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  3 01:19:07 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-crash-compute-0[201368]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Added host compute-0
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  3 01:19:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec  3 01:19:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 youthful_cerf[201268]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 01:19:07 compute-0 youthful_cerf[201268]: Scheduled mon update...
Dec  3 01:19:07 compute-0 youthful_cerf[201268]: Scheduled mgr update...
Dec  3 01:19:07 compute-0 youthful_cerf[201268]: Scheduled osd.default_drive_group update...
Dec  3 01:19:07 compute-0 systemd[1]: libpod-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope: Deactivated successfully.
Dec  3 01:19:07 compute-0 podman[201234]: 2025-12-03 01:19:07.527486052 +0000 UTC m=+1.887481814 container died bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-795397e5f2936f14d9c62d20c61d4d422ae6fa78cc96eb44569d98c37f4195a3-merged.mount: Deactivated successfully.
Dec  3 01:19:07 compute-0 podman[201234]: 2025-12-03 01:19:07.64506782 +0000 UTC m=+2.005063592 container remove bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d (image=quay.io/ceph/ceph:v18, name=youthful_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzzeoa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 01:19:07 compute-0 ceph-mon[192821]: Deploying daemon mgr.compute-0.jzzeoa on compute-0
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:07 compute-0 systemd[1]: libpod-conmon-bbdc0ccc0b78a416887e59b974abc2f56142a88cdd27fe7f336c8d8f565bac2d.scope: Deactivated successfully.
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.817981468 +0000 UTC m=+0.086728067 container create 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.783188925 +0000 UTC m=+0.051935524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:07 compute-0 systemd[1]: Started libpod-conmon-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope.
Dec  3 01:19:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.937400023 +0000 UTC m=+0.206146622 container init 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.954946523 +0000 UTC m=+0.223693112 container start 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:19:07 compute-0 nostalgic_shamir[201669]: 167 167
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.960385593 +0000 UTC m=+0.229132162 container attach 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:07 compute-0 systemd[1]: libpod-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope: Deactivated successfully.
Dec  3 01:19:07 compute-0 podman[201653]: 2025-12-03 01:19:07.962114007 +0000 UTC m=+0.230860596 container died 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a253c3d1122e38e5cb81a6bfc860786f0c4523255b78bc2d1db4f2553753afbe-merged.mount: Deactivated successfully.
Dec  3 01:19:08 compute-0 podman[201653]: 2025-12-03 01:19:08.02494509 +0000 UTC m=+0.293691649 container remove 1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:08 compute-0 systemd[1]: libpod-conmon-1e2260d693b6cd29eab623f955a8aebbe80c20ceaeebc45edebc8bd1d515d41e.scope: Deactivated successfully.
Dec  3 01:19:08 compute-0 systemd[1]: Reloading.
Dec  3 01:19:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:08 compute-0 python3[201712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:19:08 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 1 completed events
Dec  3 01:19:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.405966579 +0000 UTC m=+0.105384446 container create 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.364337881 +0000 UTC m=+0.063755818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:08 compute-0 systemd[1]: Started libpod-conmon-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope.
Dec  3 01:19:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:08 compute-0 systemd[1]: Reloading.
Dec  3 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.602689228 +0000 UTC m=+0.302107155 container init 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.643748732 +0000 UTC m=+0.343166579 container start 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:08 compute-0 podman[201749]: 2025-12-03 01:19:08.649224072 +0000 UTC m=+0.348641939 container attach 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:19:08 compute-0 ceph-mon[192821]: Added host compute-0
Dec  3 01:19:08 compute-0 ceph-mon[192821]: Saving service mon spec with placement compute-0
Dec  3 01:19:08 compute-0 ceph-mon[192821]: Saving service mgr spec with placement compute-0
Dec  3 01:19:08 compute-0 ceph-mon[192821]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 01:19:08 compute-0 ceph-mon[192821]: Saving service osd.default_drive_group spec with placement compute-0
Dec  3 01:19:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:09 compute-0 systemd[1]: Starting Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953033915' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 01:19:09 compute-0 affectionate_satoshi[201767]: 
Dec  3 01:19:09 compute-0 affectionate_satoshi[201767]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":89,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-03T01:17:36.090330+0000","services":{}},"progress_events":{"2279a671-6d1b-4c56-8cb2-1d26cdb52ade":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  3 01:19:09 compute-0 systemd[1]: libpod-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope: Deactivated successfully.
Dec  3 01:19:09 compute-0 conmon[201767]: conmon 4d6894c58cf6802add4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope/container/memory.events
Dec  3 01:19:09 compute-0 podman[201749]: 2025-12-03 01:19:09.33217492 +0000 UTC m=+1.031592797 container died 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ab02870b6767d9d2d09e6eef59317def8fbb94d791dc5d365c78f239be17b5-merged.mount: Deactivated successfully.
Dec  3 01:19:09 compute-0 podman[201749]: 2025-12-03 01:19:09.429767814 +0000 UTC m=+1.129185661 container remove 4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac (image=quay.io/ceph/ceph:v18, name=affectionate_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:09 compute-0 systemd[1]: libpod-conmon-4d6894c58cf6802add4d543fb06e1a9b1d876ff7d3600cf1288c1f9017347cac.scope: Deactivated successfully.
Dec  3 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.463164982 +0000 UTC m=+0.115286990 container create 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.423847362 +0000 UTC m=+0.075969450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef/merged/var/lib/ceph/mgr/ceph-compute-0.jzzeoa supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.588855278 +0000 UTC m=+0.240977316 container init 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:09 compute-0 podman[201875]: 2025-12-03 01:19:09.607360582 +0000 UTC m=+0.259482630 container start 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:09 compute-0 bash[201875]: 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831
Dec  3 01:19:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:09 compute-0 ceph-mgr[201906]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:19:09 compute-0 ceph-mgr[201906]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  3 01:19:09 compute-0 ceph-mgr[201906]: pidfile_write: ignore empty --pid-file
Dec  3 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:09 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2))
Dec  3 01:19:09 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2279a671-6d1b-4c56-8cb2-1d26cdb52ade (Updating mgr deployment (+1 -> 2)) in 3 seconds
Dec  3 01:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:09 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'alerts'
Dec  3 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'balancer'
Dec  3 01:19:10 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:10.149+0000 7fcf5ea18140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  3 01:19:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:10 compute-0 podman[202031]: 2025-12-03 01:19:10.400840087 +0000 UTC m=+0.130904800 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:19:10 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:10.417+0000 7fcf5ea18140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  3 01:19:10 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'cephadm'
Dec  3 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:11 compute-0 podman[202167]: 2025-12-03 01:19:11.384665208 +0000 UTC m=+0.140670751 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:19:11 compute-0 podman[202167]: 2025-12-03 01:19:11.483186747 +0000 UTC m=+0.239192310 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 27546808-74c7-4256-a5e6-9d0b617f7c05 does not exist
Dec  3 01:19:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  3 01:19:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1))
Dec  3 01:19:12 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec  3 01:19:12 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec  3 01:19:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'crash'
Dec  3 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:19:12 compute-0 ceph-mgr[201906]: mgr[py] Loading python module 'dashboard'
Dec  3 01:19:12 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa[201902]: 2025-12-03T01:19:12.793+0000 7fcf5ea18140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:12 compute-0 ceph-mon[192821]: Removing daemon mgr.compute-0.jzzeoa from compute-0 -- ports [8765]
Dec  3 01:19:13 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:13 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 2 completed events
Dec  3 01:19:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:19:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:13 compute-0 podman[202431]: 2025-12-03 01:19:13.370847543 +0000 UTC m=+0.116251564 container died 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8fcc06e1885822a9d3ffab6363ba10f22d6422768f1836efa9b4ba39f3bbef-merged.mount: Deactivated successfully.
Dec  3 01:19:13 compute-0 podman[202431]: 2025-12-03 01:19:13.433098311 +0000 UTC m=+0.178502332 container remove 5bef9a27fca951228d9f01ffa5d61aa5c3f327c76fda497f735c2a15ffe82831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:13 compute-0 bash[202431]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-jzzeoa
Dec  3 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Main process exited, code=exited, status=143/n/a
Dec  3 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Failed with result 'exit-code'.
Dec  3 01:19:13 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.jzzeoa for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:13 compute-0 systemd[1]: ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.jzzeoa.service: Consumed 5.464s CPU time.
Dec  3 01:19:13 compute-0 systemd[1]: Reloading.
Dec  3 01:19:13 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:13 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jzzeoa
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jzzeoa
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"} v 0) v1
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]: dispatch
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]': finished
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1))
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 0455ede0-4afe-4f3f-8d3d-e4ebe79e5b6a (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2fee12d3-9cf0-45ee-b9e9-9903722e8bfa does not exist
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:19:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:15 compute-0 ceph-mon[192821]: Removing key for mgr.compute-0.jzzeoa
Dec  3 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]: dispatch
Dec  3 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jzzeoa"}]': finished
Dec  3 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:19:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.330035077 +0000 UTC m=+0.081008480 container create d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.295039709 +0000 UTC m=+0.046013192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:15 compute-0 systemd[1]: Started libpod-conmon-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope.
Dec  3 01:19:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.476484696 +0000 UTC m=+0.227458139 container init d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.497851164 +0000 UTC m=+0.248824547 container start d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:19:15 compute-0 festive_boyd[202678]: 167 167
Dec  3 01:19:15 compute-0 systemd[1]: libpod-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope: Deactivated successfully.
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.503270483 +0000 UTC m=+0.254243946 container attach d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.507925203 +0000 UTC m=+0.258898606 container died d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aec8b2e2aced7e8d643cdc28b6a581334dbafe603d70b665efb6bafa9c25b88c-merged.mount: Deactivated successfully.
Dec  3 01:19:15 compute-0 podman[202675]: 2025-12-03 01:19:15.564790123 +0000 UTC m=+0.153990234 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:19:15 compute-0 podman[202662]: 2025-12-03 01:19:15.57947859 +0000 UTC m=+0.330451973 container remove d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:19:15 compute-0 systemd[1]: libpod-conmon-d10a3277facf73c54a637e1e27fc906407fb90c689aeb6afb09fdca3e62bbb85.scope: Deactivated successfully.
Dec  3 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.807926083 +0000 UTC m=+0.079472671 container create 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.775231624 +0000 UTC m=+0.046778262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:15 compute-0 systemd[1]: Started libpod-conmon-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope.
Dec  3 01:19:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:15 compute-0 podman[202721]: 2025-12-03 01:19:15.98349634 +0000 UTC m=+0.255042928 container init 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:19:16 compute-0 podman[202721]: 2025-12-03 01:19:16.025116607 +0000 UTC m=+0.296663195 container start 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:16 compute-0 podman[202721]: 2025-12-03 01:19:16.031918552 +0000 UTC m=+0.303465180 container attach 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 01:19:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:17 compute-0 gracious_visvesvaraya[202735]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:19:17 compute-0 gracious_visvesvaraya[202735]: --> relative data size: 1.0
Dec  3 01:19:17 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:17 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 551e0f4a-0b7e-47cf-9522-b82f94d4038c
Dec  3 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"} v 0) v1
Dec  3 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]: dispatch
Dec  3 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  3 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]': finished
Dec  3 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  3 01:19:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  3 01:19:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:17 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 01:19:18 compute-0 lvm[202799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 01:19:18 compute-0 lvm[202799]: VG ceph_vg0 finished
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  3 01:19:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]: dispatch
Dec  3 01:19:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1603431928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c"}]': finished
Dec  3 01:19:18 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 3 completed events
Dec  3 01:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:19:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 01:19:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760755779' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: stderr: got monmap epoch 1
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.0
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  3 01:19:18 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 551e0f4a-0b7e-47cf-9522-b82f94d4038c --setuser ceph --setgroup ceph
Dec  3 01:19:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 01:19:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 01:19:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:19 compute-0 ceph-mon[192821]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 01:19:19 compute-0 ceph-mon[192821]: Cluster is now healthy
Dec  3 01:19:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:18.730+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:18.731+0000 7f0d10b99740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:21 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"} v 0) v1
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]: dispatch
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]': finished
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:22 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:22 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:22 compute-0 lvm[203763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 01:19:22 compute-0 lvm[203763]: VG ceph_vg1 finished
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  3 01:19:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]: dispatch
Dec  3 01:19:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4206733841' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18"}]': finished
Dec  3 01:19:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 01:19:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511928018' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: stderr: got monmap epoch 1
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.1
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  3 01:19:22 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 38b78a6e-cf5e-4c74-a51c-1bb51cf53a18 --setuser ceph --setgroup ceph
Dec  3 01:19:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:22.979+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:22.979+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:22.980+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:22.980+0000 7f47cfa35740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 01:19:25 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:26 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  3 01:19:26 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  3 01:19:26 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:26 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ebf7eac-7883-4286-84a2-653e10a1ae8a
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"} v 0) v1
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]: dispatch
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]': finished
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:26 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 01:19:27 compute-0 lvm[204729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 01:19:27 compute-0 lvm[204729]: VG ceph_vg2 finished
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  3 01:19:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]: dispatch
Dec  3 01:19:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4021449647' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a"}]': finished
Dec  3 01:19:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  3 01:19:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008043112' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: stderr: got monmap epoch 1
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: --> Creating keyring file for osd.2
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  3 01:19:27 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2ebf7eac-7883-4286-84a2-653e10a1ae8a --setuser ceph --setgroup ceph
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:19:28
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] No pools available
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:29 compute-0 podman[158098]: time="2025-12-03T01:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25446 "" "Go-http-client/1.1"
Dec  3 01:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Dec  3 01:19:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:27.757+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:27.758+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:27.758+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: stderr: 2025-12-03T01:19:27.759+0000 7f119f54f740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  3 01:19:30 compute-0 gracious_visvesvaraya[202735]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec  3 01:19:30 compute-0 systemd[1]: libpod-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Deactivated successfully.
Dec  3 01:19:30 compute-0 systemd[1]: libpod-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Consumed 8.735s CPU time.
Dec  3 01:19:30 compute-0 podman[205665]: 2025-12-03 01:19:30.707630269 +0000 UTC m=+0.064378353 container died 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3da81a669bfd7c30bed0b92971e8da86ca7ccb1302aba715fcf4e7bee9a96c6-merged.mount: Deactivated successfully.
Dec  3 01:19:30 compute-0 podman[205665]: 2025-12-03 01:19:30.831874938 +0000 UTC m=+0.188622982 container remove 8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_visvesvaraya, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:30 compute-0 systemd[1]: libpod-conmon-8d0edba0eb0122e7f2ecc4e1b5ae3ac4e5cd44088009edbe0ed1c0056b775f59.scope: Deactivated successfully.
Dec  3 01:19:31 compute-0 podman[205705]: 2025-12-03 01:19:31.146468852 +0000 UTC m=+0.091244843 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:19:31 compute-0 podman[205707]: 2025-12-03 01:19:31.186229782 +0000 UTC m=+0.112925879 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:19:31 compute-0 podman[205706]: 2025-12-03 01:19:31.192681458 +0000 UTC m=+0.127710589 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:19:31 compute-0 podman[205708]: 2025-12-03 01:19:31.201790752 +0000 UTC m=+0.133926309 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:19:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:19:31 compute-0 podman[205899]: 2025-12-03 01:19:31.957369394 +0000 UTC m=+0.087741033 container create 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:31.918794004 +0000 UTC m=+0.049165723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:32 compute-0 systemd[1]: Started libpod-conmon-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope.
Dec  3 01:19:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.127320836 +0000 UTC m=+0.257692485 container init 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.14538846 +0000 UTC m=+0.275760109 container start 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.152303347 +0000 UTC m=+0.282675046 container attach 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:32 compute-0 eager_varahamihira[205915]: 167 167
Dec  3 01:19:32 compute-0 systemd[1]: libpod-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope: Deactivated successfully.
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.160386065 +0000 UTC m=+0.290757744 container died 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1a8ef57ab1f911a209a4330e036c180e891ec01bad672598cec5c8fac3e18ec-merged.mount: Deactivated successfully.
Dec  3 01:19:32 compute-0 podman[205899]: 2025-12-03 01:19:32.238826658 +0000 UTC m=+0.369198287 container remove 027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:32 compute-0 systemd[1]: libpod-conmon-027bed38fe655409f6725866e1cf43f00c49407cffefa69c72a68fdd712128e0.scope: Deactivated successfully.
Dec  3 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.493056613 +0000 UTC m=+0.087599149 container create 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.46062104 +0000 UTC m=+0.055163596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:32 compute-0 systemd[1]: Started libpod-conmon-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope.
Dec  3 01:19:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.639706027 +0000 UTC m=+0.234248603 container init 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.659343721 +0000 UTC m=+0.253886257 container start 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:32 compute-0 podman[205938]: 2025-12-03 01:19:32.666569406 +0000 UTC m=+0.261111942 container attach 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:19:33 compute-0 hungry_bassi[205954]: {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    "0": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "devices": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "/dev/loop3"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            ],
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_name": "ceph_lv0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_size": "21470642176",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "name": "ceph_lv0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "tags": {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_name": "ceph",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.crush_device_class": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.encrypted": "0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_id": "0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.vdo": "0"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            },
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "vg_name": "ceph_vg0"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        }
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    ],
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    "1": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "devices": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "/dev/loop4"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            ],
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_name": "ceph_lv1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_size": "21470642176",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "name": "ceph_lv1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "tags": {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_name": "ceph",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.crush_device_class": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.encrypted": "0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_id": "1",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.vdo": "0"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            },
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "vg_name": "ceph_vg1"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        }
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    ],
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    "2": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "devices": [
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "/dev/loop5"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            ],
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_name": "ceph_lv2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_size": "21470642176",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "name": "ceph_lv2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "tags": {
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.cluster_name": "ceph",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.crush_device_class": "",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.encrypted": "0",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osd_id": "2",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:                "ceph.vdo": "0"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            },
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "type": "block",
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:            "vg_name": "ceph_vg2"
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:        }
Dec  3 01:19:33 compute-0 hungry_bassi[205954]:    ]
Dec  3 01:19:33 compute-0 hungry_bassi[205954]: }
Dec  3 01:19:33 compute-0 systemd[1]: libpod-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope: Deactivated successfully.
Dec  3 01:19:33 compute-0 podman[205938]: 2025-12-03 01:19:33.508039193 +0000 UTC m=+1.102581719 container died 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bd43b40e195cfd1a39d4afe2514a50bb17027b8ca9f7c29f47f6d6d9dec7d6d-merged.mount: Deactivated successfully.
Dec  3 01:19:33 compute-0 podman[205938]: 2025-12-03 01:19:33.614841314 +0000 UTC m=+1.209383860 container remove 50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:33 compute-0 systemd[1]: libpod-conmon-50583ba70ca14297d286f66988b6d32e4d829e97705433bde6b442bc20dc5a63.scope: Deactivated successfully.
Dec  3 01:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec  3 01:19:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  3 01:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:33 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  3 01:19:33 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  3 01:19:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  3 01:19:34 compute-0 ceph-mon[192821]: Deploying daemon osd.0 on compute-0
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.806227151 +0000 UTC m=+0.079564893 container create b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.769452737 +0000 UTC m=+0.042790549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:34 compute-0 systemd[1]: Started libpod-conmon-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope.
Dec  3 01:19:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.956079827 +0000 UTC m=+0.229417629 container init b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.97412526 +0000 UTC m=+0.247463012 container start b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.981238682 +0000 UTC m=+0.254576494 container attach b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:19:34 compute-0 sharp_lovelace[206131]: 167 167
Dec  3 01:19:34 compute-0 systemd[1]: libpod-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope: Deactivated successfully.
Dec  3 01:19:34 compute-0 podman[206114]: 2025-12-03 01:19:34.989378711 +0000 UTC m=+0.262716463 container died b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0716357fe3369ba6aeff026075cd50bb3d13e11db36edeb69bf57701511f0866-merged.mount: Deactivated successfully.
Dec  3 01:19:35 compute-0 podman[206114]: 2025-12-03 01:19:35.05829511 +0000 UTC m=+0.331632842 container remove b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:35 compute-0 systemd[1]: libpod-conmon-b6c31ac039ae97a5726087f849c3864d78c199621e5965919209300ae0840b4c.scope: Deactivated successfully.
Dec  3 01:19:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.510192618 +0000 UTC m=+0.094496706 container create 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.478171306 +0000 UTC m=+0.062475444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:35 compute-0 systemd[1]: Started libpod-conmon-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope.
Dec  3 01:19:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.669119367 +0000 UTC m=+0.253423475 container init 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.703811568 +0000 UTC m=+0.288115646 container start 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:35 compute-0 podman[206162]: 2025-12-03 01:19:35.709336529 +0000 UTC m=+0.293640607 container attach 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:19:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]:                            [--no-systemd] [--no-tmpfs]
Dec  3 01:19:36 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test[206179]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 01:19:36 compute-0 systemd[1]: libpod-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope: Deactivated successfully.
Dec  3 01:19:36 compute-0 podman[206162]: 2025-12-03 01:19:36.36081462 +0000 UTC m=+0.945118708 container died 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-308e7236468aa12be49f0959cd8a8d5c6afb61f49af0c80f7fdf1832af26970c-merged.mount: Deactivated successfully.
Dec  3 01:19:36 compute-0 podman[206162]: 2025-12-03 01:19:36.470081914 +0000 UTC m=+1.054385972 container remove 35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:19:36 compute-0 systemd[1]: libpod-conmon-35e74423b4d9663dec3acd69edf213d647dce6b30d351af8823c0e421583bc9c.scope: Deactivated successfully.
Dec  3 01:19:36 compute-0 podman[206186]: 2025-12-03 01:19:36.558944345 +0000 UTC m=+0.151525140 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  3 01:19:36 compute-0 systemd[1]: Reloading.
Dec  3 01:19:36 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:36 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:37 compute-0 systemd[1]: Reloading.
Dec  3 01:19:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:37 compute-0 systemd[1]: Starting Ceph osd.0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.232359103 +0000 UTC m=+0.093602133 container create 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.19872933 +0000 UTC m=+0.059972390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.383506152 +0000 UTC m=+0.244749242 container init 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.399876273 +0000 UTC m=+0.261119313 container start 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:38 compute-0 podman[206356]: 2025-12-03 01:19:38.406738589 +0000 UTC m=+0.267981629 container attach 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:39 compute-0 bash[206356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 01:19:39 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate[206371]: --> ceph-volume raw activate successful for osd ID: 0
Dec  3 01:19:39 compute-0 bash[206356]: --> ceph-volume raw activate successful for osd ID: 0
Dec  3 01:19:39 compute-0 python3[206462]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:19:39 compute-0 systemd[1]: libpod-4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8.scope: Deactivated successfully.
Dec  3 01:19:39 compute-0 podman[206356]: 2025-12-03 01:19:39.858200751 +0000 UTC m=+1.719443791 container died 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:19:39 compute-0 systemd[1]: libpod-4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8.scope: Consumed 1.478s CPU time.
Dec  3 01:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-840de21aa4bf583e81a78a51fb9cc67f9610fbdd4427370f1e1fbf599f85afbe-merged.mount: Deactivated successfully.
Dec  3 01:19:39 compute-0 podman[206356]: 2025-12-03 01:19:39.955674713 +0000 UTC m=+1.816917713 container remove 4d6db52bbeb627e4cf8d041d8aba80a7e6559a9d341a02b1666d557daf8333a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:39 compute-0 podman[206527]: 2025-12-03 01:19:39.974758793 +0000 UTC m=+0.101611159 container create 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:39.940407331 +0000 UTC m=+0.067259787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:19:40 compute-0 systemd[1]: Started libpod-conmon-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope.
Dec  3 01:19:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.11257382 +0000 UTC m=+0.239426226 container init 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.133262751 +0000 UTC m=+0.260115157 container start 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:40 compute-0 podman[206527]: 2025-12-03 01:19:40.14022806 +0000 UTC m=+0.267080516 container attach 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:19:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.367224026 +0000 UTC m=+0.077473300 container create 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.331242852 +0000 UTC m=+0.041492186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c583ea3a03bd3818cd12cab5471c6b4c0e0e18a215878a4bb19751b24a0d6d9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.543830178 +0000 UTC m=+0.254079452 container init 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:40 compute-0 podman[206596]: 2025-12-03 01:19:40.564641952 +0000 UTC m=+0.274891196 container start 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:40 compute-0 bash[206596]: 42c5471d35c5fdc17001e59ed959fef762fb1fec0ac41750cae402122a3b0431
Dec  3 01:19:40 compute-0 systemd[1]: Started Ceph osd.0 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:40 compute-0 ceph-osd[206633]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:19:40 compute-0 ceph-osd[206633]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 01:19:40 compute-0 ceph-osd[206633]: pidfile_write: ignore empty --pid-file
Dec  3 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd958a5800 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec  3 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:40 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  3 01:19:40 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  3 01:19:40 compute-0 podman[206634]: 2025-12-03 01:19:40.738881724 +0000 UTC m=+0.109222984 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  3 01:19:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 01:19:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785632389' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 01:19:40 compute-0 bold_bassi[206566]: 
Dec  3 01:19:40 compute-0 bold_bassi[206566]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T01:19:30.172459+0000","services":{}},"progress_events":{}}
Dec  3 01:19:40 compute-0 systemd[1]: libpod-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope: Deactivated successfully.
Dec  3 01:19:40 compute-0 ceph-osd[206633]: bdev(0x55cd94a6d800 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 01:19:40 compute-0 podman[206710]: 2025-12-03 01:19:40.948214367 +0000 UTC m=+0.076752661 container died 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.967 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.968 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.968 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.973 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-594df817889bdd0486fea0909b9de7e62a58ed726d71564289b540373d9af110-merged.mount: Deactivated successfully.
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.995 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.997 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:19:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:19:41.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:19:41 compute-0 podman[206710]: 2025-12-03 01:19:41.019662721 +0000 UTC m=+0.148200975 container remove 6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e (image=quay.io/ceph/ceph:v18, name=bold_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:19:41 compute-0 systemd[1]: libpod-conmon-6020181d75c602eb3191eae4ec4934eb5dfd87ce6234ff551e8822a35bd3380e.scope: Deactivated successfully.
Dec  3 01:19:41 compute-0 ceph-osd[206633]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  3 01:19:41 compute-0 ceph-osd[206633]: load: jerasure load: lrc 
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.623917708 +0000 UTC m=+0.098562000 container create 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  3 01:19:41 compute-0 ceph-mon[192821]: Deploying daemon osd.1 on compute-0
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.589097505 +0000 UTC m=+0.063741847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:41 compute-0 systemd[1]: Started libpod-conmon-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope.
Dec  3 01:19:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.764853045 +0000 UTC m=+0.239497387 container init 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.780953269 +0000 UTC m=+0.255597561 container start 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.790918004 +0000 UTC m=+0.265562306 container attach 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 01:19:41 compute-0 magical_hodgkin[206846]: 167 167
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:41 compute-0 systemd[1]: libpod-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope: Deactivated successfully.
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.798099189 +0000 UTC m=+0.272743491 container died 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95926c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluefs mount
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluefs mount shared_bdev_used = 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Git sha 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DB SUMMARY
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DB Session ID:  CYHBGYLFJSJZ0MXF1HD1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                     Options.env: 0x55cd958f7d50
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                Options.info_log: 0x55cd94af47e0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.write_buffer_manager: 0x55cd959fc460
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Compression algorithms supported:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kZSTD supported: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kXpressCompression supported: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kZlibCompression supported: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cd94ae11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cd94ae11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8346a1d68eed95b957b0eace13869e26620226129516a7e68c68bf6934eaa13-merged.mount: Deactivated successfully.
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:41 compute-0 podman[206825]: 2025-12-03 01:19:41.882858224 +0000 UTC m=+0.357502496 container remove 34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fb464fcb-4fed-4245-84a5-bd0adda5e152
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724781882014, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724781882446, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  3 01:19:41 compute-0 ceph-osd[206633]: freelist init
Dec  3 01:19:41 compute-0 ceph-osd[206633]: freelist _read_cfg
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 01:19:41 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bluefs umount
Dec  3 01:19:41 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 01:19:41 compute-0 systemd[1]: libpod-conmon-34b3dedb8c6721a419de51c067d40decf1f86468e452033664ad5d1baf006d31.scope: Deactivated successfully.
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bdev(0x55cd95927400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluefs mount
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluefs mount shared_bdev_used = 4718592
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Git sha 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB SUMMARY
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB Session ID:  CYHBGYLFJSJZ0MXF1HD0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                     Options.env: 0x55cd95a8c230
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                Options.info_log: 0x55cd94af4540
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.write_buffer_manager: 0x55cd959fc460
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Compression algorithms supported:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kZSTD supported: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cd94ae11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cd94ae11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cd94ae11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd94af4300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd94ae1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fb464fcb-4fed-4245-84a5-bd0adda5e152
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782153478, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782159232, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782164294, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782168746, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724782, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fb464fcb-4fed-4245-84a5-bd0adda5e152", "db_session_id": "CYHBGYLFJSJZ0MXF1HD0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724782171037, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 01:19:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cd94c4e000
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: DB pointer 0x55cd959e1a00
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  3 01:19:42 compute-0 ceph-osd[206633]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  3 01:19:42 compute-0 ceph-osd[206633]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 01:19:42 compute-0 ceph-osd[206633]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load lua
Dec  3 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load sdk
Dec  3 01:19:42 compute-0 ceph-osd[206633]: _get_class not permitted to load test_remote_reads
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 load_pgs
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 load_pgs opened 0 pgs
Dec  3 01:19:42 compute-0 ceph-osd[206633]: osd.0 0 log_to_monitors true
Dec  3 01:19:42 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:42.210+0000 7f2a29dc9740 -1 osd.0 0 log_to_monitors true
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  3 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.27662713 +0000 UTC m=+0.079790569 container create cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.24232513 +0000 UTC m=+0.045488629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:42 compute-0 systemd[1]: Started libpod-conmon-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope.
Dec  3 01:19:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.437874609 +0000 UTC m=+0.241038088 container init cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.472518508 +0000 UTC m=+0.275681937 container start cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:42 compute-0 podman[207252]: 2025-12-03 01:19:42.479261061 +0000 UTC m=+0.282424550 container attach cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:42 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:42 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]:                            [--no-systemd] [--no-tmpfs]
Dec  3 01:19:43 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test[207299]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 01:19:43 compute-0 systemd[1]: libpod-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope: Deactivated successfully.
Dec  3 01:19:43 compute-0 podman[207252]: 2025-12-03 01:19:43.209897933 +0000 UTC m=+1.013061372 container died cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:19:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 01:19:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 01:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7ce26ded24f94315fc65706dd26fb152415f20b3f2e0d5af70702455935b97-merged.mount: Deactivated successfully.
Dec  3 01:19:43 compute-0 podman[207252]: 2025-12-03 01:19:43.309038558 +0000 UTC m=+1.112201967 container remove cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:19:43 compute-0 systemd[1]: libpod-conmon-cd98723d7cb36e5ce4b04456cafc5ef010729fda491b5539e63155561775c0f5.scope: Deactivated successfully.
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 done with init, starting boot process
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 start_boot
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 01:19:43 compute-0 ceph-osd[206633]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:43 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:43 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:43 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:43 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:43 compute-0 systemd[1]: Reloading.
Dec  3 01:19:43 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:43 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:44 compute-0 systemd[1]: Reloading.
Dec  3 01:19:44 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:44 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:44 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:44 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:44 compute-0 systemd[1]: Starting Ceph osd.1 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:44 compute-0 ceph-mon[192821]: from='osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.154898512 +0000 UTC m=+0.101623159 container create 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.109020174 +0000 UTC m=+0.055744881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.308028172 +0000 UTC m=+0.254752879 container init 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.328711923 +0000 UTC m=+0.275436570 container start 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:19:45 compute-0 podman[207454]: 2025-12-03 01:19:45.353790376 +0000 UTC m=+0.300515023 container attach 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:19:45 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:45 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:45 compute-0 podman[207474]: 2025-12-03 01:19:45.840163499 +0000 UTC m=+0.106226387 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:19:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:46 compute-0 bash[207454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 01:19:46 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate[207468]: --> ceph-volume raw activate successful for osd ID: 1
Dec  3 01:19:46 compute-0 bash[207454]: --> ceph-volume raw activate successful for osd ID: 1
Dec  3 01:19:46 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:46 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:46 compute-0 systemd[1]: libpod-3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b.scope: Deactivated successfully.
Dec  3 01:19:46 compute-0 podman[207454]: 2025-12-03 01:19:46.726515028 +0000 UTC m=+1.673239675 container died 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:19:46 compute-0 systemd[1]: libpod-3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b.scope: Consumed 1.423s CPU time.
Dec  3 01:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaf7e0b5779b401043a565a2616685d945fcf859e4865a0c0c1b11ba53a8b04-merged.mount: Deactivated successfully.
Dec  3 01:19:46 compute-0 podman[207454]: 2025-12-03 01:19:46.874486536 +0000 UTC m=+1.821211153 container remove 3e77ce91418a2e6ca52034631f4444ebb47f9e46aa02e68af12e153d1beb976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.336305939 +0000 UTC m=+0.117342923 container create a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.290519074 +0000 UTC m=+0.071556108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8aa97d9c21534306e2ce4d3abcd1ff4de31d567e8c3a6cd8b7d05e9985db1d7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.507949794 +0000 UTC m=+0.288986788 container init a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:47 compute-0 podman[207686]: 2025-12-03 01:19:47.525500644 +0000 UTC m=+0.306537628 container start a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:47 compute-0 bash[207686]: a464c63d7c3230f0b989d23ed0fa3df0dde5b72cc239e46ea2f70efbee749d3a
Dec  3 01:19:47 compute-0 systemd[1]: Started Ceph osd.1 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:47 compute-0 ceph-osd[207705]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:19:47 compute-0 ceph-osd[207705]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 01:19:47 compute-0 ceph-osd[207705]: pidfile_write: ignore empty --pid-file
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4b21800 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a3ce9800 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec  3 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  3 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:19:47 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  3 01:19:47 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  3 01:19:47 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:47 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  3 01:19:47 compute-0 ceph-osd[207705]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  3 01:19:47 compute-0 ceph-osd[207705]: load: jerasure load: lrc 
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:47 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 01:19:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 17.295 iops: 4427.503 elapsed_sec: 0.678
Dec  3 01:19:48 compute-0 ceph-osd[207705]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 01:19:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [WRN] : OSD bench result of 4427.503498 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 0 waiting for initial osdmap
Dec  3 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:48.215+0000 7f2a25d49640 -1 osd.0 0 waiting for initial osdmap
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba2c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs mount
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs mount shared_bdev_used = 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Git sha 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB SUMMARY
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB Session ID:  KYPQACPV34ZGSGC2ZUZA
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                     Options.env: 0x55f0a4b73d50
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                Options.info_log: 0x55f0a3d707e0
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 set_numa_affinity not setting numa affinity
Dec  3 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-0[206610]: 2025-12-03T01:19:48.250+0000 7f2a21371640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.write_buffer_manager: 0x55f0a4c82460
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compression algorithms supported:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kZSTD supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70180)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 62d22037-8fb2-4da9-b46c-8157fa97fa34
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788351197, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788351501, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
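The `_open_db` line above records the compact comma-separated option string that BlueStore handed to RocksDB for this open. As an aside, a string in this shape can be split into key/value pairs with a few lines of Python; this is an illustrative sketch for reading the log line, not Ceph's own parser (the helper name `parse_rocksdb_options` is made up here):

```python
def parse_rocksdb_options(opts: str) -> dict:
    """Split a comma-separated key=value RocksDB option string into a dict.

    Values are kept as strings exactly as logged (e.g. "2MB", "16777216");
    a bare token without '=' would map to an empty string.
    """
    result = {}
    for token in opts.split(","):
        key, _, value = token.partition("=")
        result[key.strip()] = value.strip()
    return result

# The option string logged by bluestore _open_db above:
opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

parsed = parse_rocksdb_options(opts)
```

Each key here matches an `Options.*` field echoed back by RocksDB in the dump that follows (e.g. `write_buffer_size: 16777216`, `max_bytes_for_level_multiplier: 8.000000`).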
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: freelist init
Dec  3 01:19:48 compute-0 ceph-osd[207705]: freelist _read_cfg
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs umount
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bdev(0x55f0a4ba3400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs mount
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluefs mount shared_bdev_used = 4718592
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Git sha 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB SUMMARY
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB Session ID:  KYPQACPV34ZGSGC2ZUZB
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                     Options.env: 0x55f0a4d12230
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                Options.info_log: 0x55f0a3d70540
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.write_buffer_manager: 0x55f0a4c826e0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Compression algorithms supported:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kZSTD supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kXpressCompression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kZlibCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d70980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f0a3d5d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0a3d43f00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f0a3d5d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 62d22037-8fb2-4da9-b46c-8157fa97fa34
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788569637, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788578430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788586798, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788592647, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724788, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "62d22037-8fb2-4da9-b46c-8157fa97fa34", "db_session_id": "KYPQACPV34ZGSGC2ZUZB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724788595749, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.622403746 +0000 UTC m=+0.055366632 container create 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f0a4d3bc00
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: DB pointer 0x55f0a4c5da00
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  3 01:19:48 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 1.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  3 01:19:48 compute-0 ceph-osd[207705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 01:19:48 compute-0 ceph-osd[207705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load lua
Dec  3 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load sdk
Dec  3 01:19:48 compute-0 ceph-osd[207705]: _get_class not permitted to load test_remote_reads
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 load_pgs
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 load_pgs opened 0 pgs
Dec  3 01:19:48 compute-0 ceph-osd[207705]: osd.1 0 log_to_monitors true
Dec  3 01:19:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:48.645+0000 7fb70d768740 -1 osd.1 0 log_to_monitors true
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  3 01:19:48 compute-0 systemd[1]: Started libpod-conmon-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope.
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.598830401 +0000 UTC m=+0.031793287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:48 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2166370730; not ready for session (expect reconnect)
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.715611948 +0000 UTC m=+0.148574824 container init 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.72349298 +0000 UTC m=+0.156455836 container start 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.727434072 +0000 UTC m=+0.160396938 container attach 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:19:48 compute-0 festive_carver[208293]: 167 167
Dec  3 01:19:48 compute-0 systemd[1]: libpod-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope: Deactivated successfully.
Dec  3 01:19:48 compute-0 conmon[208293]: conmon 3ff0cc9cfa6c1234c3e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope/container/memory.events
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.734596185 +0000 UTC m=+0.167559061 container died 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-683a05fa03e3c1f808064764e514ea91c20a591a85985a0103a66ba0470360e6-merged.mount: Deactivated successfully.
Dec  3 01:19:48 compute-0 podman[208163]: 2025-12-03 01:19:48.787666248 +0000 UTC m=+0.220629124 container remove 3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 01:19:48 compute-0 systemd[1]: libpod-conmon-3ff0cc9cfa6c1234c3e1ce43a2c703a0cf945b614184e0e135395ebd8927e2a8.scope: Deactivated successfully.
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec  3 01:19:48 compute-0 ceph-mon[192821]: Deploying daemon osd.2 on compute-0
Dec  3 01:19:48 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  3 01:19:48 compute-0 ceph-osd[206633]: osd.0 9 state: booting -> active
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730] boot
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:48 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.154755459 +0000 UTC m=+0.085648289 container create 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.123991419 +0000 UTC m=+0.054884279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:49 compute-0 systemd[1]: Started libpod-conmon-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope.
Dec  3 01:19:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.295742457 +0000 UTC m=+0.226635357 container init 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.308271369 +0000 UTC m=+0.239164179 container start 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.31338341 +0000 UTC m=+0.244276250 container attach 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:19:49 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 01:19:49 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 done with init, starting boot process
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 start_boot
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 01:19:49 compute-0 ceph-osd[207705]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  3 01:19:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:49 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:49 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:49 compute-0 ceph-mon[192821]: OSD bench result of 4427.503498 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:19:49 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 01:19:49 compute-0 ceph-mon[192821]: osd.0 [v2:192.168.122.100:6802/2166370730,v1:192.168.122.100:6803/2166370730] boot
Dec  3 01:19:49 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  3 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]:                            [--no-systemd] [--no-tmpfs]
Dec  3 01:19:49 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test[208340]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 01:19:49 compute-0 systemd[1]: libpod-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope: Deactivated successfully.
Dec  3 01:19:49 compute-0 podman[208325]: 2025-12-03 01:19:49.987901972 +0000 UTC m=+0.918794862 container died 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf419895a72ba26fc5308459b72be7c7eb4e80c0749ac27775066a3333ddc344-merged.mount: Deactivated successfully.
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 01:19:50 compute-0 podman[208325]: 2025-12-03 01:19:50.201800472 +0000 UTC m=+1.132693312 container remove 82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:19:50 compute-0 systemd[1]: libpod-conmon-82137c4387f9fb84546ebab24f9bbb008562805244da507c393f617742896571.scope: Deactivated successfully.
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: [devicehealth INFO root] creating mgr pool
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  3 01:19:50 compute-0 systemd[1]: Reloading.
Dec  3 01:19:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  3 01:19:50 compute-0 ceph-mon[192821]: from='osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 01:19:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  3 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  3 01:19:50 compute-0 ceph-osd[206633]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:50 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec  3 01:19:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  3 01:19:51 compute-0 systemd[1]: Reloading.
Dec  3 01:19:51 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:19:51 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:19:51 compute-0 systemd[1]: Starting Ceph osd.2 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:19:51 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  3 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec  3 01:19:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec  3 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 01:19:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  3 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:51 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.135395718 +0000 UTC m=+0.115335791 container create c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.082338576 +0000 UTC m=+0.062278699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 01:19:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.350811487 +0000 UTC m=+0.330751530 container init c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.367221698 +0000 UTC m=+0.347161741 container start c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:19:52 compute-0 podman[208502]: 2025-12-03 01:19:52.387396696 +0000 UTC m=+0.367336769 container attach c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:19:52 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:52 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:53 compute-0 bash[208502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 01:19:53 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate[208517]: --> ceph-volume raw activate successful for osd ID: 2
Dec  3 01:19:53 compute-0 bash[208502]: --> ceph-volume raw activate successful for osd ID: 2
Dec  3 01:19:53 compute-0 systemd[1]: libpod-c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467.scope: Deactivated successfully.
Dec  3 01:19:53 compute-0 podman[208502]: 2025-12-03 01:19:53.711214372 +0000 UTC m=+1.691154435 container died c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:19:53 compute-0 systemd[1]: libpod-c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467.scope: Consumed 1.356s CPU time.
Dec  3 01:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa6dceb8aa82b7221f66ade5ac561b52b495564f11ea5171f56c2a155ec5ab8-merged.mount: Deactivated successfully.
Dec  3 01:19:53 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:53 compute-0 podman[208502]: 2025-12-03 01:19:53.858225836 +0000 UTC m=+1.838165879 container remove c02c4dc6a168c25719173e8b3f98f35beed4f2455e0107d273436be151548467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:19:53 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.205722185 +0000 UTC m=+0.076406342 container create 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.173403805 +0000 UTC m=+0.044087972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99fb35f9df473732875a1a99e6ba123e335cca124d0153a1f0bf001e0eb1f973/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.327152772 +0000 UTC m=+0.197836929 container init 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:19:54 compute-0 podman[208712]: 2025-12-03 01:19:54.341485179 +0000 UTC m=+0.212169336 container start 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:19:54 compute-0 bash[208712]: 8463edd2b7dbdc905640f8f015989671b483be937ce442ff49f078714b648dcc
Dec  3 01:19:54 compute-0 systemd[1]: Started Ceph osd.2 for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:19:54 compute-0 ceph-osd[208731]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:19:54 compute-0 ceph-osd[208731]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  3 01:19:54 compute-0 ceph-osd[208731]: pidfile_write: ignore empty --pid-file
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82fd1800 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.459 iops: 4725.506 elapsed_sec: 0.635
Dec  3 01:19:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [WRN] : OSD bench result of 4725.505655 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 0 waiting for initial osdmap
Dec  3 01:19:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:54.400+0000 7fb7096e8640 -1 osd.1 0 waiting for initial osdmap
Dec  3 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Dec  3 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 set_numa_affinity not setting numa affinity
Dec  3 01:19:54 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-1[207701]: 2025-12-03T01:19:54.436+0000 7fb704d10640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:19:54 compute-0 ceph-osd[207705]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b82199800 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 01:19:54 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1951846642; not ready for session (expect reconnect)
Dec  3 01:19:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:54 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 01:19:54 compute-0 ceph-osd[208731]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  3 01:19:54 compute-0 ceph-osd[208731]: load: jerasure load: lrc 
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:54 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:19:55 compute-0 ceph-osd[207705]: osd.1 12 tick checking mon for new map
Dec  3 01:19:55 compute-0 ceph-osd[208731]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83044c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs mount
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs mount shared_bdev_used = 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.435456567 +0000 UTC m=+0.055240669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Git sha 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB SUMMARY
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB Session ID:  K9B5HJRG0MH6OVU3TJ45
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                     Options.env: 0x558b83023d50
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                Options.info_log: 0x558b82220840
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.write_buffer_manager: 0x558b83128460
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compression algorithms supported:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZSTD supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  3 01:19:55 compute-0 ceph-mon[192821]: OSD bench result of 4725.505655 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:19:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.575152583 +0000 UTC m=+0.194936685 container create 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558b8220d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558b8220d090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b82220260)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c81f52-1eb7-420b-9ec6-1a961223b8b4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607048, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795607440, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  3 01:19:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642] boot
Dec  3 01:19:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec  3 01:19:55 compute-0 ceph-osd[208731]: freelist init
Dec  3 01:19:55 compute-0 ceph-osd[208731]: freelist _read_cfg
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  3 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs umount
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 01:19:55 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:55 compute-0 ceph-osd[207705]: osd.1 13 state: booting -> active
Dec  3 01:19:55 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:19:55 compute-0 systemd[1]: Started libpod-conmon-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope.
Dec  3 01:19:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.733142598 +0000 UTC m=+0.352926700 container init 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.750679328 +0000 UTC m=+0.370463420 container start 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.757223616 +0000 UTC m=+0.377007708 container attach 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:19:55 compute-0 hopeful_curie[209101]: 167 167
Dec  3 01:19:55 compute-0 systemd[1]: libpod-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope: Deactivated successfully.
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.763807775 +0000 UTC m=+0.383591867 container died 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bdev(0x558b83045400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs mount
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluefs mount shared_bdev_used = 4718592
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: RocksDB version: 7.9.2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Git sha 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB SUMMARY
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB Session ID:  K9B5HJRG0MH6OVU3TJ44
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: CURRENT file:  CURRENT
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.error_if_exists: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.create_if_missing: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                     Options.env: 0x558b831b8310
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                Options.info_log: 0x558b824e6d80
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.statistics: (nil)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.use_fsync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.db_log_dir: 
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.write_buffer_manager: 0x558b831286e0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.unordered_write: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.row_cache: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                              Options.wal_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.two_write_queues: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.wal_compression: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.atomic_flush: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_jobs: 4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_background_compactions: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_subcompactions: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.max_open_files: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Compression algorithms supported:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZSTD supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kXpressCompression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kZlibCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fb40ed2d11f24e75587ec5394c565198f26c137e3e413e091eba462e72f11a-merged.mount: Deactivated successfully.
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b821f2f40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 podman[208891]: 2025-12-03 01:19:55.834057618 +0000 UTC m=+0.453841690 container remove 1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:           Options.merge_operator: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558b822168c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558b8220cf30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.compression: LZ4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.num_levels: 7
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.bloom_locality: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                               Options.ttl: 2592000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                       Options.enable_blob_files: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                           Options.min_blob_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 01:19:55 compute-0 systemd[1]: libpod-conmon-1fbfda5db0d4e493e74cdef0f7432049672a2f18e2538ba7273f8abf920899c7.scope: Deactivated successfully.
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53c81f52-1eb7-420b-9ec6-1a961223b8b4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795874809, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795881263, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795888417, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795893366, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724795, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53c81f52-1eb7-420b-9ec6-1a961223b8b4", "db_session_id": "K9B5HJRG0MH6OVU3TJ44", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764724795895716, "job": 1, "event": "recovery_finished"}
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558b82255c00
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: DB pointer 0x558b83103a00
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  3 01:19:55 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  3 01:19:55 compute-0 ceph-osd[208731]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 01:19:55 compute-0 ceph-osd[208731]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load lua
Dec  3 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load sdk
Dec  3 01:19:55 compute-0 ceph-osd[208731]: _get_class not permitted to load test_remote_reads
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 load_pgs
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 load_pgs opened 0 pgs
Dec  3 01:19:55 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:19:55.935+0000 7fa138e91740 -1 osd.2 0 log_to_monitors true
Dec  3 01:19:55 compute-0 ceph-osd[208731]: osd.2 0 log_to_monitors true
Dec  3 01:19:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec  3 01:19:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  3 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.013428492 +0000 UTC m=+0.042548083 container create 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:19:56 compute-0 systemd[1]: Started libpod-conmon-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope.
Dec  3 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:55.996270241 +0000 UTC m=+0.025389852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:19:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.165855934 +0000 UTC m=+0.194975615 container init 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:19:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  3 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.198077561 +0000 UTC m=+0.227197192 container start 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:19:56 compute-0 podman[209339]: 2025-12-03 01:19:56.205816019 +0000 UTC m=+0.234935640 container attach 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  3 01:19:56 compute-0 ceph-mon[192821]: osd.1 [v2:192.168.122.100:6806/1951846642,v1:192.168.122.100:6807/1951846642] boot
Dec  3 01:19:56 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:56 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:56 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth INFO root] creating main.db for devicehealth
Dec  3 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 01:19:56 compute-0 ceph-mgr[193109]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 01:19:56 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 01:19:56 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 01:19:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  3 01:19:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]: {
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_id": 2,
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "type": "bluestore"
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    },
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_id": 1,
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "type": "bluestore"
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    },
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_id": 0,
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:        "type": "bluestore"
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]:    }
Dec  3 01:19:57 compute-0 nostalgic_sanderson[209354]: }
Dec  3 01:19:57 compute-0 systemd[1]: libpod-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Deactivated successfully.
Dec  3 01:19:57 compute-0 podman[209339]: 2025-12-03 01:19:57.380220241 +0000 UTC m=+1.409339862 container died 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:19:57 compute-0 systemd[1]: libpod-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Consumed 1.177s CPU time.
Dec  3 01:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c446ae8829d37ca8829b712fc99bd40d9ca25e8d56d9f6c34f095de545481b6-merged.mount: Deactivated successfully.
Dec  3 01:19:57 compute-0 podman[209339]: 2025-12-03 01:19:57.488465219 +0000 UTC m=+1.517584830 container remove 359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:19:57 compute-0 systemd[1]: libpod-conmon-359bf3fc06f551d15ab571d14e95110f1f6ba9d00145f0dfb7142d1bb6ce04a3.scope: Deactivated successfully.
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 done with init, starting boot process
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 start_boot
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 01:19:57 compute-0 ceph-osd[208731]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:19:57 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:57 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:19:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:57 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:58 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:58 compute-0 ceph-mon[192821]: from='osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 01:19:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rysove(active, since 90s)
Dec  3 01:19:59 compute-0 podman[209631]: 2025-12-03 01:19:59.232636094 +0000 UTC m=+0.140244471 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 01:19:59 compute-0 podman[209631]: 2025-12-03 01:19:59.34936968 +0000 UTC m=+0.256977977 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:19:59 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:19:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:19:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:19:59 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:19:59 compute-0 podman[158098]: time="2025-12-03T01:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29176 "" "Go-http-client/1.1"
Dec  3 01:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5806 "" "Go-http-client/1.1"
Dec  3 01:20:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  3 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:00 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:20:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:20:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:20:00 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:20:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:20:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:20:01 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:20:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:20:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:20:01 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:20:01 compute-0 podman[209900]: 2025-12-03 01:20:01.843149794 +0000 UTC m=+0.114643223 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec  3 01:20:01 compute-0 podman[209899]: 2025-12-03 01:20:01.864257376 +0000 UTC m=+0.138016694 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:20:01 compute-0 podman[209901]: 2025-12-03 01:20:01.880376129 +0000 UTC m=+0.150684688 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:20:01 compute-0 podman[209902]: 2025-12-03 01:20:01.889413171 +0000 UTC m=+0.167457939 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:20:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.393 iops: 4708.491 elapsed_sec: 0.637
Dec  3 01:20:02 compute-0 ceph-osd[208731]: log_channel(cluster) log [WRN] : OSD bench result of 4708.491241 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 0 waiting for initial osdmap
Dec  3 01:20:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:20:02.225+0000 7fa134e11640 -1 osd.2 0 waiting for initial osdmap
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 set_numa_affinity not setting numa affinity
Dec  3 01:20:02 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-osd-2[208727]: 2025-12-03T01:20:02.262+0000 7fa130439640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 01:20:02 compute-0 ceph-osd[208731]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.559897549 +0000 UTC m=+0.098070917 container create f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.52486691 +0000 UTC m=+0.063040348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:02 compute-0 systemd[1]: Started libpod-conmon-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope.
Dec  3 01:20:02 compute-0 ceph-mgr[193109]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/312088855; not ready for session (expect reconnect)
Dec  3 01:20:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:20:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:20:02 compute-0 ceph-mgr[193109]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 01:20:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.711237503 +0000 UTC m=+0.249410911 container init f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.730487147 +0000 UTC m=+0.268660485 container start f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.736145852 +0000 UTC m=+0.274319210 container attach f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:02 compute-0 reverent_lehmann[210119]: 167 167
Dec  3 01:20:02 compute-0 systemd[1]: libpod-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope: Deactivated successfully.
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.746616991 +0000 UTC m=+0.284790349 container died f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-018c788e321842f0d1bb78befa9e9e03a2e157fb724599a7c9a0a7bd3ed70092-merged.mount: Deactivated successfully.
Dec  3 01:20:02 compute-0 podman[210103]: 2025-12-03 01:20:02.82373393 +0000 UTC m=+0.361907268 container remove f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:20:02 compute-0 systemd[1]: libpod-conmon-f564b47326de960418e58925ff3ebc52e34cf8db48ac82c64339eda97db731a7.scope: Deactivated successfully.
Dec  3 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.112869781 +0000 UTC m=+0.095416110 container create 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.077290688 +0000 UTC m=+0.059837027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:03 compute-0 systemd[1]: Started libpod-conmon-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope.
Dec  3 01:20:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:03 compute-0 ceph-osd[208731]: osd.2 15 tick checking mon for new map
Dec  3 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  3 01:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:03 compute-0 ceph-mon[192821]: OSD bench result of 4708.491241 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec  3 01:20:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855] boot
Dec  3 01:20:03 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec  3 01:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  3 01:20:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  3 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.30648825 +0000 UTC m=+0.289034609 container init 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:03 compute-0 ceph-osd[208731]: osd.2 16 state: booting -> active
Dec  3 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.330666261 +0000 UTC m=+0.313212570 container start 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:20:03 compute-0 podman[210141]: 2025-12-03 01:20:03.336251294 +0000 UTC m=+0.318797643 container attach 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:20:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  3 01:20:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  3 01:20:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec  3 01:20:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec  3 01:20:04 compute-0 ceph-mon[192821]: osd.2 [v2:192.168.122.100:6810/312088855,v1:192.168.122.100:6811/312088855] boot
Dec  3 01:20:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]: [
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:    {
Dec  3 01:20:05 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "available": false,
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "ceph_device": false,
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "lsm_data": {},
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "lvs": [],
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "path": "/dev/sr0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "rejected_reasons": [
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "Insufficient space (<5GB)",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "Has a FileSystem"
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        ],
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        "sys_api": {
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "actuators": null,
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "device_nodes": "sr0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "devname": "sr0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "human_readable_size": "482.00 KB",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "id_bus": "ata",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "model": "QEMU DVD-ROM",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "nr_requests": "2",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "parent": "/dev/sr0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "partitions": {},
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "path": "/dev/sr0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "removable": "1",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "rev": "2.5+",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "ro": "0",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "rotational": "1",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "sas_address": "",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "sas_device_handle": "",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "scheduler_mode": "mq-deadline",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "sectors": 0,
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "sectorsize": "2048",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "size": 493568.0,
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "support_discard": "2048",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "type": "disk",
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:            "vendor": "QEMU"
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:        }
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]:    }
Dec  3 01:20:05 compute-0 admiring_lehmann[210157]: ]
Dec  3 01:20:05 compute-0 systemd[1]: libpod-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Deactivated successfully.
Dec  3 01:20:05 compute-0 systemd[1]: libpod-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Consumed 2.784s CPU time.
Dec  3 01:20:05 compute-0 podman[210141]: 2025-12-03 01:20:05.949141426 +0000 UTC m=+2.931687755 container died 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7ab52b737b7f9cfb514a13cbecdafe2ce643d38ab808fe397ba265f89fec11c-merged.mount: Deactivated successfully.
Dec  3 01:20:06 compute-0 podman[210141]: 2025-12-03 01:20:06.050845196 +0000 UTC m=+3.033391495 container remove 7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:20:06 compute-0 systemd[1]: libpod-conmon-7c7d1b4342e98786527b033f71a4fb20a8de0ae4b1a7afd997109df77e08e636.scope: Deactivated successfully.
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1e4e1413-00da-47c0-9b21-f54446e7d26e does not exist
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d4e92ed5-a8c5-404c-8983-429b6c6b8ca7 does not exist
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d1fce79c-9b8c-48ed-ba2d-93eda69b7fc5 does not exist
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:06 compute-0 podman[212518]: 2025-12-03 01:20:06.840976464 +0000 UTC m=+0.124026424 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  3 01:20:07 compute-0 ceph-mon[192821]: Adjusting osd_memory_target on compute-0 to 43688k
Dec  3 01:20:07 compute-0 ceph-mon[192821]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.287807782 +0000 UTC m=+0.086751487 container create 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.260680376 +0000 UTC m=+0.059624161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:07 compute-0 systemd[1]: Started libpod-conmon-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope.
Dec  3 01:20:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.426777169 +0000 UTC m=+0.225720904 container init 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.445031197 +0000 UTC m=+0.243974922 container start 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.452690544 +0000 UTC m=+0.251634329 container attach 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:20:07 compute-0 quirky_bassi[212596]: 167 167
Dec  3 01:20:07 compute-0 systemd[1]: libpod-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope: Deactivated successfully.
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.458357749 +0000 UTC m=+0.257301474 container died 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc3ef435dc3f689e2862609141e295dc2d2ef7481c3bd13bb4694af95f96f05a-merged.mount: Deactivated successfully.
Dec  3 01:20:07 compute-0 podman[212580]: 2025-12-03 01:20:07.545013254 +0000 UTC m=+0.343956979 container remove 528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bassi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:07 compute-0 systemd[1]: libpod-conmon-528583b4d8206f2dbc5e845381659b87481c160cfcc94066d5aa531d8236afd1.scope: Deactivated successfully.
Dec  3 01:20:07 compute-0 podman[212618]: 2025-12-03 01:20:07.831198489 +0000 UTC m=+0.084779887 container create c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:20:07 compute-0 podman[212618]: 2025-12-03 01:20:07.800824779 +0000 UTC m=+0.054406187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:07 compute-0 systemd[1]: Started libpod-conmon-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope.
Dec  3 01:20:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.063006048 +0000 UTC m=+0.316587436 container init c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.081742159 +0000 UTC m=+0.335323547 container start c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:20:08 compute-0 podman[212618]: 2025-12-03 01:20:08.088445661 +0000 UTC m=+0.342027099 container attach c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:20:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:09 compute-0 exciting_wozniak[212632]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:20:09 compute-0 exciting_wozniak[212632]: --> relative data size: 1.0
Dec  3 01:20:09 compute-0 exciting_wozniak[212632]: --> All data devices are unavailable
Dec  3 01:20:09 compute-0 systemd[1]: libpod-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Deactivated successfully.
Dec  3 01:20:09 compute-0 systemd[1]: libpod-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Consumed 1.248s CPU time.
Dec  3 01:20:09 compute-0 podman[212661]: 2025-12-03 01:20:09.473452991 +0000 UTC m=+0.065122360 container died c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f16cf92c052412f540d7b058976f51f5e99ce8259d19584f2c6b22134c967908-merged.mount: Deactivated successfully.
Dec  3 01:20:09 compute-0 podman[212661]: 2025-12-03 01:20:09.585002345 +0000 UTC m=+0.176671664 container remove c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wozniak, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:09 compute-0 systemd[1]: libpod-conmon-c41174541bb25bddc099f35d918c3bf301acef7917e21a362a7c6be165d0b868.scope: Deactivated successfully.
Dec  3 01:20:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.67384445 +0000 UTC m=+0.093664360 container create 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.639884361 +0000 UTC m=+0.059704321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:10 compute-0 systemd[1]: Started libpod-conmon-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope.
Dec  3 01:20:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.831418655 +0000 UTC m=+0.251238555 container init 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.840695732 +0000 UTC m=+0.260515612 container start 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.846371319 +0000 UTC m=+0.266191199 container attach 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:10 compute-0 competent_lamarr[212831]: 167 167
Dec  3 01:20:10 compute-0 systemd[1]: libpod-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope: Deactivated successfully.
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.848798366 +0000 UTC m=+0.268618256 container died 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  3 01:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f2781f6e14009bb92163540a7af863f660d57f7d7531a00ee8d09f6e429645-merged.mount: Deactivated successfully.
Dec  3 01:20:10 compute-0 podman[212815]: 2025-12-03 01:20:10.891938588 +0000 UTC m=+0.311758458 container remove 3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lamarr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:20:10 compute-0 podman[212832]: 2025-12-03 01:20:10.89634142 +0000 UTC m=+0.103547473 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 01:20:10 compute-0 systemd[1]: libpod-conmon-3f7c3cbf67d667eb05aa2367cfa67993adc90a6653a19b7db901db32755d6a94.scope: Deactivated successfully.
Dec  3 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.165176201 +0000 UTC m=+0.097728723 container create 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:20:11 compute-0 systemd[1]: Started libpod-conmon-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope.
Dec  3 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.131356146 +0000 UTC m=+0.063908748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.294348541 +0000 UTC m=+0.226901123 container init 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.310473227 +0000 UTC m=+0.243025779 container start 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:11 compute-0 podman[212874]: 2025-12-03 01:20:11.317295045 +0000 UTC m=+0.249847657 container attach 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:11 compute-0 python3[212916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.519054622 +0000 UTC m=+0.107765190 container create e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.486543263 +0000 UTC m=+0.075253851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:11 compute-0 systemd[1]: Started libpod-conmon-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope.
Dec  3 01:20:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.692176807 +0000 UTC m=+0.280887355 container init e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.709396203 +0000 UTC m=+0.298106741 container start e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:11 compute-0 podman[212921]: 2025-12-03 01:20:11.713548948 +0000 UTC m=+0.302259486 container attach e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:12 compute-0 interesting_feynman[212914]: {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    "0": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "devices": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "/dev/loop3"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            ],
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_name": "ceph_lv0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_size": "21470642176",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "name": "ceph_lv0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "tags": {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.crush_device_class": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.encrypted": "0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_id": "0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.vdo": "0"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            },
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "vg_name": "ceph_vg0"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        }
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    ],
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    "1": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "devices": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "/dev/loop4"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            ],
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_name": "ceph_lv1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_size": "21470642176",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "name": "ceph_lv1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "tags": {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.crush_device_class": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.encrypted": "0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_id": "1",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.vdo": "0"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            },
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "vg_name": "ceph_vg1"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        }
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    ],
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    "2": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "devices": [
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "/dev/loop5"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            ],
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_name": "ceph_lv2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_size": "21470642176",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "name": "ceph_lv2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "tags": {
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.crush_device_class": "",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.encrypted": "0",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osd_id": "2",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:                "ceph.vdo": "0"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            },
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "type": "block",
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:            "vg_name": "ceph_vg2"
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:        }
Dec  3 01:20:12 compute-0 interesting_feynman[212914]:    ]
Dec  3 01:20:12 compute-0 interesting_feynman[212914]: }
Dec  3 01:20:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:12 compute-0 systemd[1]: libpod-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope: Deactivated successfully.
Dec  3 01:20:12 compute-0 podman[212965]: 2025-12-03 01:20:12.283542473 +0000 UTC m=+0.063516877 container died 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e757aeebc4ad4e8857fff26b8413decdd763e271b2dc8a5923c9b7c5de83bf-merged.mount: Deactivated successfully.
Dec  3 01:20:12 compute-0 podman[212965]: 2025-12-03 01:20:12.382905949 +0000 UTC m=+0.162880323 container remove 57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:20:12 compute-0 systemd[1]: libpod-conmon-57ff9b70e1cbac0c37b4d23189a52a7c0e0c8b1849a63d351ce3f265a00f3c0b.scope: Deactivated successfully.
Dec  3 01:20:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 01:20:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019109794' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 01:20:12 compute-0 charming_elion[212938]: 
Dec  3 01:20:12 compute-0 charming_elion[212938]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":152,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502763520,"bytes_avail":63909163008,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T01:19:30.172459+0000","services":{}},"progress_events":{}}
Dec  3 01:20:12 compute-0 systemd[1]: libpod-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope: Deactivated successfully.
Dec  3 01:20:12 compute-0 conmon[212938]: conmon e71d81812b6e7bbe8be8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope/container/memory.events
Dec  3 01:20:12 compute-0 podman[213001]: 2025-12-03 01:20:12.553441943 +0000 UTC m=+0.030948856 container died e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a812bebaab9b4c960137747810f3eacbd2d67c439645d3f5df1ad0257ce14df-merged.mount: Deactivated successfully.
Dec  3 01:20:12 compute-0 podman[213001]: 2025-12-03 01:20:12.62024935 +0000 UTC m=+0.097756223 container remove e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d (image=quay.io/ceph/ceph:v18, name=charming_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:20:12 compute-0 systemd[1]: libpod-conmon-e71d81812b6e7bbe8be864e4b8b914097ae7df18f1bb6534243eb3eb10e11b4d.scope: Deactivated successfully.
Dec  3 01:20:13 compute-0 python3[213143]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.2968444 +0000 UTC m=+0.084612209 container create 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.260631299 +0000 UTC m=+0.048399158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.35834358 +0000 UTC m=+0.105297331 container create ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:20:13 compute-0 systemd[1]: Started libpod-conmon-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope.
Dec  3 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.321112391 +0000 UTC m=+0.068066212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:13 compute-0 systemd[1]: Started libpod-conmon-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope.
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.455212847 +0000 UTC m=+0.242980676 container init 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:20:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.466934031 +0000 UTC m=+0.254701820 container start 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.472758202 +0000 UTC m=+0.260526071 container attach 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:13 compute-0 peaceful_ellis[213184]: 167 167
Dec  3 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.498142934 +0000 UTC m=+0.245096705 container init ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:20:13 compute-0 systemd[1]: libpod-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope: Deactivated successfully.
Dec  3 01:20:13 compute-0 conmon[213184]: conmon 85f16529f36546b57690 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope/container/memory.events
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.509916179 +0000 UTC m=+0.297683998 container died 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.515814702 +0000 UTC m=+0.262768453 container start ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:20:13 compute-0 podman[213164]: 2025-12-03 01:20:13.532069042 +0000 UTC m=+0.279022873 container attach ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-10cdad83084b65ea5d8d3a4d31a0823c6e803f73a363595423f7732fdfd6d105-merged.mount: Deactivated successfully.
Dec  3 01:20:13 compute-0 podman[213157]: 2025-12-03 01:20:13.572961842 +0000 UTC m=+0.360729641 container remove 85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:13 compute-0 systemd[1]: libpod-conmon-85f16529f36546b576908e08d973ee97972d657849717bfacd43b7aabe2167d9.scope: Deactivated successfully.
Dec  3 01:20:13 compute-0 podman[213214]: 2025-12-03 01:20:13.876933414 +0000 UTC m=+0.100857509 container create 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:13 compute-0 podman[213214]: 2025-12-03 01:20:13.836678521 +0000 UTC m=+0.060602696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:13 compute-0 systemd[1]: Started libpod-conmon-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope.
Dec  3 01:20:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.074417633 +0000 UTC m=+0.298341758 container init 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.086969149 +0000 UTC m=+0.310893254 container start 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:20:14 compute-0 podman[213214]: 2025-12-03 01:20:14.092704458 +0000 UTC m=+0.316628553 container attach 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  3 01:20:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  3 01:20:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  3 01:20:14 compute-0 gifted_feistel[213189]: pool 'vms' created
Dec  3 01:20:14 compute-0 systemd[1]: libpod-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope: Deactivated successfully.
Dec  3 01:20:14 compute-0 podman[213164]: 2025-12-03 01:20:14.363417441 +0000 UTC m=+1.110371212 container died ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-af7dd8be0fd2fb49e4280012725222dc15bd05df388591e567811b5b38c1a985-merged.mount: Deactivated successfully.
Dec  3 01:20:14 compute-0 podman[213164]: 2025-12-03 01:20:14.455194027 +0000 UTC m=+1.202147778 container remove ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7 (image=quay.io/ceph/ceph:v18, name=gifted_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:14 compute-0 systemd[1]: libpod-conmon-ba8053ceda77b425ecab812c516e4174762d1110dc1ed6d77d09c49b4f5d28c7.scope: Deactivated successfully.
Dec  3 01:20:14 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:14 compute-0 python3[213292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:14 compute-0 podman[213294]: 2025-12-03 01:20:14.96469063 +0000 UTC m=+0.097939908 container create 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:14.920755566 +0000 UTC m=+0.054004884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:15 compute-0 systemd[1]: Started libpod-conmon-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope.
Dec  3 01:20:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.125810563 +0000 UTC m=+0.259059851 container init 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.167862286 +0000 UTC m=+0.301111534 container start 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:15 compute-0 podman[213294]: 2025-12-03 01:20:15.172985648 +0000 UTC m=+0.306234896 container attach 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:15 compute-0 eager_gould[213248]: {
Dec  3 01:20:15 compute-0 eager_gould[213248]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_id": 2,
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "type": "bluestore"
Dec  3 01:20:15 compute-0 eager_gould[213248]:    },
Dec  3 01:20:15 compute-0 eager_gould[213248]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_id": 1,
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "type": "bluestore"
Dec  3 01:20:15 compute-0 eager_gould[213248]:    },
Dec  3 01:20:15 compute-0 eager_gould[213248]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_id": 0,
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:15 compute-0 eager_gould[213248]:        "type": "bluestore"
Dec  3 01:20:15 compute-0 eager_gould[213248]:    }
Dec  3 01:20:15 compute-0 eager_gould[213248]: }
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  3 01:20:15 compute-0 systemd[1]: libpod-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Deactivated successfully.
Dec  3 01:20:15 compute-0 systemd[1]: libpod-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Consumed 1.232s CPU time.
Dec  3 01:20:15 compute-0 podman[213214]: 2025-12-03 01:20:15.324982409 +0000 UTC m=+1.548906534 container died 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:15 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3160741069' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-490cab92d943df89fed96029ab79ef64af6bc1d6cc5f415c58190cd38e03fc54-merged.mount: Deactivated successfully.
Dec  3 01:20:15 compute-0 podman[213214]: 2025-12-03 01:20:15.418162984 +0000 UTC m=+1.642087079 container remove 6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:15 compute-0 systemd[1]: libpod-conmon-6c30b6b9b5581ec410080795ac5ae151cadce7b7e8c31a167e201e0d3b811064.scope: Deactivated successfully.
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:15 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 01:20:15 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  3 01:20:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:15 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 01:20:15 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 01:20:16 compute-0 podman[213447]: 2025-12-03 01:20:16.141423906 +0000 UTC m=+0.118277621 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:20:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:16 compute-0 ceph-mon[192821]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 01:20:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  3 01:20:16 compute-0 ceph-mon[192821]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 01:20:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  3 01:20:16 compute-0 keen_turing[213317]: pool 'volumes' created
Dec  3 01:20:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  3 01:20:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:16 compute-0 systemd[1]: libpod-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope: Deactivated successfully.
Dec  3 01:20:16 compute-0 podman[213294]: 2025-12-03 01:20:16.385732478 +0000 UTC m=+1.518981746 container died 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-336f9fbdf7ba402aa31e58ac54c1c0b9e55f4e4353cdb1981d4a1220b3542170-merged.mount: Deactivated successfully.
Dec  3 01:20:16 compute-0 podman[213294]: 2025-12-03 01:20:16.47623864 +0000 UTC m=+1.609487888 container remove 97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde (image=quay.io/ceph/ceph:v18, name=keen_turing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:20:16 compute-0 systemd[1]: libpod-conmon-97f2c97c529edcb3d31f5eaecf8515dcb651795804cb837bea0ab37a255addde.scope: Deactivated successfully.
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.804891204 +0000 UTC m=+0.078883371 container create 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.771068539 +0000 UTC m=+0.045060756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:16 compute-0 systemd[1]: Started libpod-conmon-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope.
Dec  3 01:20:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:16 compute-0 python3[213608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.962725946 +0000 UTC m=+0.236718173 container init 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.981120724 +0000 UTC m=+0.255112881 container start 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.987918612 +0000 UTC m=+0.261910779 container attach 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:16 compute-0 peaceful_liskov[213614]: 167 167
Dec  3 01:20:16 compute-0 podman[213593]: 2025-12-03 01:20:16.996298914 +0000 UTC m=+0.270291081 container died 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:20:16 compute-0 systemd[1]: libpod-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope: Deactivated successfully.
Dec  3 01:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0dcfa42131ede9a60c2df56d0b692e65bd1abab8f5d7013bbbe28fc38aaba5-merged.mount: Deactivated successfully.
Dec  3 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.078578578 +0000 UTC m=+0.112114780 container create b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:17 compute-0 podman[213593]: 2025-12-03 01:20:17.088303417 +0000 UTC m=+0.362295574 container remove 5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:17 compute-0 systemd[1]: libpod-conmon-5163fd9371585a5ee18531f76f39d44654a042fbbb25c12770715335bffc7028.scope: Deactivated successfully.
Dec  3 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.037651527 +0000 UTC m=+0.071187869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:17 compute-0 systemd[1]: Started libpod-conmon-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope.
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:17 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec  3 01:20:17 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:17 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec  3 01:20:17 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec  3 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.201062253 +0000 UTC m=+0.234598495 container init b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.211955344 +0000 UTC m=+0.245491536 container start b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:20:17 compute-0 podman[213617]: 2025-12-03 01:20:17.21614675 +0000 UTC m=+0.249682992 container attach b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  3 01:20:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4279297250' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:17 compute-0 ceph-mon[192821]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rysove", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  3 01:20:17 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:17 compute-0 podman[213789]: 2025-12-03 01:20:17.951176687 +0000 UTC m=+0.088386194 container create 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:18 compute-0 systemd[1]: Started libpod-conmon-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope.
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:17.923288306 +0000 UTC m=+0.060497833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.078168307 +0000 UTC m=+0.215377874 container init 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.093199823 +0000 UTC m=+0.230409340 container start 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.099706392 +0000 UTC m=+0.236915949 container attach 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:20:18 compute-0 wonderful_heyrovsky[213805]: 167 167
Dec  3 01:20:18 compute-0 systemd[1]: libpod-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope: Deactivated successfully.
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.105258846 +0000 UTC m=+0.242468363 container died 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-763bd738ce2462d984813cdbe3306af05f5eaac7499ea093a95e5079e6fe0b45-merged.mount: Deactivated successfully.
Dec  3 01:20:18 compute-0 podman[213789]: 2025-12-03 01:20:18.177163923 +0000 UTC m=+0.314373430 container remove 8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:18 compute-0 systemd[1]: libpod-conmon-8fe83601e72151583e61efa0f1cfa76c8624b0ac8d38f9597e5ca1e7aecf13f2.scope: Deactivated successfully.
Dec  3 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  3 01:20:18 compute-0 ceph-mon[192821]: Reconfiguring mgr.compute-0.rysove (unknown last config time)...
Dec  3 01:20:18 compute-0 ceph-mon[192821]: Reconfiguring daemon mgr.compute-0.rysove on compute-0
Dec  3 01:20:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  3 01:20:18 compute-0 festive_curran[213647]: pool 'backups' created
Dec  3 01:20:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  3 01:20:18 compute-0 systemd[1]: libpod-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope: Deactivated successfully.
Dec  3 01:20:18 compute-0 podman[213617]: 2025-12-03 01:20:18.463737994 +0000 UTC m=+1.497274226 container died b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a667f94fea81cd0c0836a2e9324becad684bb3f33879fe74a5630bdc5273c1-merged.mount: Deactivated successfully.
Dec  3 01:20:18 compute-0 podman[213617]: 2025-12-03 01:20:18.567243315 +0000 UTC m=+1.600779547 container remove b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89 (image=quay.io/ceph/ceph:v18, name=festive_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:18 compute-0 systemd[1]: libpod-conmon-b2eb5feb638da1e8765861292155ea5cb22f00196bd7ad88d41efd07c62d3c89.scope: Deactivated successfully.
Dec  3 01:20:19 compute-0 python3[213956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:19 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.116804065 +0000 UTC m=+0.088302071 container create ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.084957825 +0000 UTC m=+0.056455851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:19 compute-0 systemd[1]: Started libpod-conmon-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope.
Dec  3 01:20:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.296040019 +0000 UTC m=+0.267538095 container init ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.310696395 +0000 UTC m=+0.282194381 container start ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:19 compute-0 podman[213959]: 2025-12-03 01:20:19.315674482 +0000 UTC m=+0.287172498 container attach ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  3 01:20:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3136411572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  3 01:20:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  3 01:20:19 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:19 compute-0 podman[214058]: 2025-12-03 01:20:19.914515854 +0000 UTC m=+0.153877444 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:20:20 compute-0 podman[214058]: 2025-12-03 01:20:20.057916928 +0000 UTC m=+0.297278458 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v70: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  3 01:20:20 compute-0 loving_gates[213995]: pool 'images' created
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  3 01:20:20 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:20 compute-0 systemd[1]: libpod-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope: Deactivated successfully.
Dec  3 01:20:20 compute-0 podman[213959]: 2025-12-03 01:20:20.486079182 +0000 UTC m=+1.457577198 container died ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec  3 01:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7288ef81c696e57eb619971cc21707f2074b82193ebf502a7ae0ba8302477478-merged.mount: Deactivated successfully.
Dec  3 01:20:20 compute-0 podman[213959]: 2025-12-03 01:20:20.579277258 +0000 UTC m=+1.550775244 container remove ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba (image=quay.io/ceph/ceph:v18, name=loving_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:20 compute-0 systemd[1]: libpod-conmon-ecb8b249ee6f35c0512ccb3c6f88a4d9db448f9b052070ae4fd252e29705ffba.scope: Deactivated successfully.
Dec  3 01:20:20 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 956a7060-fb19-4748-b8fa-bc36867a42d8 does not exist
Dec  3 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c9aec575-cde9-47de-a8eb-c6fae78cdcae does not exist
Dec  3 01:20:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1f5bbf98-9013-4c7a-abb8-2c93e75050ee does not exist
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:21 compute-0 python3[214218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.16363388 +0000 UTC m=+0.091429959 container create beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.121340471 +0000 UTC m=+0.049136550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:21 compute-0 systemd[1]: Started libpod-conmon-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope.
Dec  3 01:20:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.329372331 +0000 UTC m=+0.257168470 container init beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.352159811 +0000 UTC m=+0.279955910 container start beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:21 compute-0 podman[214237]: 2025-12-03 01:20:21.359404571 +0000 UTC m=+0.287200740 container attach beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2749902737' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  3 01:20:21 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  3 01:20:21 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.114672067 +0000 UTC m=+0.077210225 container create 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.079738011 +0000 UTC m=+0.042276199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:22 compute-0 systemd[1]: Started libpod-conmon-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope.
Dec  3 01:20:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v73: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.262778321 +0000 UTC m=+0.225316529 container init 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.281900239 +0000 UTC m=+0.244438387 container start 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.289283083 +0000 UTC m=+0.251821281 container attach 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:22 compute-0 great_gates[214413]: 167 167
Dec  3 01:20:22 compute-0 systemd[1]: libpod-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope: Deactivated successfully.
Dec  3 01:20:22 compute-0 conmon[214413]: conmon 65628ad404aedd3ced05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope/container/memory.events
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.294416875 +0000 UTC m=+0.256955033 container died 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83e669ed51f9890cacebd3a2d0623002fcb0fd2f3b6dd0f77fb5a5cd540e59b-merged.mount: Deactivated successfully.
Dec  3 01:20:22 compute-0 podman[214399]: 2025-12-03 01:20:22.383850167 +0000 UTC m=+0.346388315 container remove 65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:20:22 compute-0 systemd[1]: libpod-conmon-65628ad404aedd3ced05fef920cebc034870f9ef9984d8054b14016eb02f1c6f.scope: Deactivated successfully.
Dec  3 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  3 01:20:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  3 01:20:22 compute-0 nice_galileo[214280]: pool 'cephfs.cephfs.meta' created
Dec  3 01:20:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  3 01:20:22 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:22 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:22 compute-0 systemd[1]: libpod-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope: Deactivated successfully.
Dec  3 01:20:22 compute-0 podman[214237]: 2025-12-03 01:20:22.553114106 +0000 UTC m=+1.480910215 container died beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-25fe81900b3b7686f602c366506a0a168ff3208ff96d5b76ab91178c9b6a38a7-merged.mount: Deactivated successfully.
Dec  3 01:20:22 compute-0 podman[214237]: 2025-12-03 01:20:22.628729426 +0000 UTC m=+1.556525495 container remove beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c (image=quay.io/ceph/ceph:v18, name=nice_galileo, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:22 compute-0 systemd[1]: libpod-conmon-beac8b425de06037cf2ef690969b73e374621d57e567c8b4a3e33e12d0eedb7c.scope: Deactivated successfully.
Dec  3 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.700491209 +0000 UTC m=+0.074522341 container create c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.673262837 +0000 UTC m=+0.047293979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:22 compute-0 systemd[1]: Started libpod-conmon-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope.
Dec  3 01:20:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.868328668 +0000 UTC m=+0.242359820 container init c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.887074496 +0000 UTC m=+0.261105618 container start c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:20:22 compute-0 podman[214444]: 2025-12-03 01:20:22.89262535 +0000 UTC m=+0.266656462 container attach c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:20:23 compute-0 python3[214493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.211629497 +0000 UTC m=+0.086926864 container create cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.178416649 +0000 UTC m=+0.053714076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:23 compute-0 systemd[1]: Started libpod-conmon-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope.
Dec  3 01:20:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.372499124 +0000 UTC m=+0.247796471 container init cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.388036073 +0000 UTC m=+0.263333430 container start cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:20:23 compute-0 podman[214494]: 2025-12-03 01:20:23.396153608 +0000 UTC m=+0.271450965 container attach cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  3 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  3 01:20:23 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  3 01:20:23 compute-0 ceph-mon[192821]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:23 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2747225468' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:23 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  3 01:20:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:24 compute-0 musing_bell[214467]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:20:24 compute-0 musing_bell[214467]: --> relative data size: 1.0
Dec  3 01:20:24 compute-0 musing_bell[214467]: --> All data devices are unavailable
Dec  3 01:20:24 compute-0 systemd[1]: libpod-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Deactivated successfully.
Dec  3 01:20:24 compute-0 systemd[1]: libpod-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Consumed 1.215s CPU time.
Dec  3 01:20:24 compute-0 podman[214444]: 2025-12-03 01:20:24.170385937 +0000 UTC m=+1.544417069 container died c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v76: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-766ff20d30bc2760e555be2ea1d478ac5805e8c7745bd4744feada345ccec964-merged.mount: Deactivated successfully.
Dec  3 01:20:24 compute-0 podman[214444]: 2025-12-03 01:20:24.287835093 +0000 UTC m=+1.661866225 container remove c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:20:24 compute-0 systemd[1]: libpod-conmon-c18804b0584e2aafc0f7356b4c6116a2ed25cb866e155bbc5d20294d23088396.scope: Deactivated successfully.
Dec  3 01:20:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  3 01:20:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  3 01:20:24 compute-0 interesting_lalande[214509]: pool 'cephfs.cephfs.data' created
Dec  3 01:20:24 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  3 01:20:24 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  3 01:20:24 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:24 compute-0 systemd[1]: libpod-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope: Deactivated successfully.
Dec  3 01:20:24 compute-0 podman[214494]: 2025-12-03 01:20:24.590933761 +0000 UTC m=+1.466231158 container died cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 01:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-44867f99faadb6b42b9ac795cbf16a9642e4481ec3ee1683e5d8858c46267282-merged.mount: Deactivated successfully.
Dec  3 01:20:24 compute-0 podman[214494]: 2025-12-03 01:20:24.707647837 +0000 UTC m=+1.582945204 container remove cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0 (image=quay.io/ceph/ceph:v18, name=interesting_lalande, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:24 compute-0 systemd[1]: libpod-conmon-cef72690d4012a8d74aec56d07d52218c3ff33690b9eac7f65ae03d020e929e0.scope: Deactivated successfully.
Dec  3 01:20:25 compute-0 python3[214710]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.262904334 +0000 UTC m=+0.070898670 container create 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:25 compute-0 systemd[1]: Started libpod-conmon-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope.
Dec  3 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.240942707 +0000 UTC m=+0.048937093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.421888439 +0000 UTC m=+0.229882815 container init 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.44110795 +0000 UTC m=+0.249102346 container start 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:25 compute-0 podman[214729]: 2025-12-03 01:20:25.888360862 +0000 UTC m=+0.696355218 container attach 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  3 01:20:25 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1024770579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 01:20:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  3 01:20:25 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  3 01:20:25 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:25 compute-0 podman[214783]: 2025-12-03 01:20:25.998763874 +0000 UTC m=+0.062950711 container create 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:20:26 compute-0 systemd[1]: Started libpod-conmon-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope.
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:25.980739026 +0000 UTC m=+0.044925903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.144814621 +0000 UTC m=+0.209001498 container init 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.154733285 +0000 UTC m=+0.218920172 container start 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.160459743 +0000 UTC m=+0.224646590 container attach 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:20:26 compute-0 priceless_satoshi[214802]: 167 167
Dec  3 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec  3 01:20:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  3 01:20:26 compute-0 systemd[1]: libpod-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope: Deactivated successfully.
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.167795976 +0000 UTC m=+0.231982893 container died 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:20:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a346cc584e143eee2b78bb4ac669d0ccfdfa5afdb9d4258fabd06ec7519054-merged.mount: Deactivated successfully.
Dec  3 01:20:26 compute-0 podman[214783]: 2025-12-03 01:20:26.227879407 +0000 UTC m=+0.292066234 container remove 7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:20:26 compute-0 systemd[1]: libpod-conmon-7dfd0b6a92b55e928d8356ae13ef01b2fb0978e01a0d1d7a8b93be51d07744a7.scope: Deactivated successfully.
Dec  3 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.514859019 +0000 UTC m=+0.090119462 container create 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.476208261 +0000 UTC m=+0.051468754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:26 compute-0 systemd[1]: Started libpod-conmon-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope.
Dec  3 01:20:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.656330309 +0000 UTC m=+0.231590762 container init 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.675215451 +0000 UTC m=+0.250475864 container start 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:26 compute-0 podman[214826]: 2025-12-03 01:20:26.681804943 +0000 UTC m=+0.257065446 container attach 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  3 01:20:26 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  3 01:20:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 01:20:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  3 01:20:26 compute-0 priceless_black[214762]: enabled application 'rbd' on pool 'vms'
Dec  3 01:20:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  3 01:20:26 compute-0 systemd[1]: libpod-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope: Deactivated successfully.
Dec  3 01:20:26 compute-0 podman[214729]: 2025-12-03 01:20:26.961369911 +0000 UTC m=+1.769364247 container died 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8caf3d13fb4dbaabd1d462d16482e92982f430aa912cf09951e1d3baf01c219b-merged.mount: Deactivated successfully.
Dec  3 01:20:27 compute-0 podman[214729]: 2025-12-03 01:20:27.022868341 +0000 UTC m=+1.830862677 container remove 88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784 (image=quay.io/ceph/ceph:v18, name=priceless_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:27 compute-0 systemd[1]: libpod-conmon-88b2e3693f701bb75afb5ccc399bcc51975b6c2cf5fc2c482b457e5955faf784.scope: Deactivated successfully.
Dec  3 01:20:27 compute-0 python3[214883]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]: {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    "0": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "devices": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "/dev/loop3"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            ],
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_name": "ceph_lv0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_size": "21470642176",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "name": "ceph_lv0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "tags": {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.crush_device_class": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.encrypted": "0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_id": "0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.vdo": "0"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            },
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "vg_name": "ceph_vg0"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        }
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    ],
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    "1": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "devices": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "/dev/loop4"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            ],
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_name": "ceph_lv1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_size": "21470642176",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "name": "ceph_lv1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "tags": {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.crush_device_class": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.encrypted": "0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_id": "1",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.vdo": "0"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            },
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "vg_name": "ceph_vg1"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        }
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    ],
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    "2": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "devices": [
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "/dev/loop5"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            ],
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_name": "ceph_lv2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_size": "21470642176",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "name": "ceph_lv2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "tags": {
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.crush_device_class": "",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.encrypted": "0",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osd_id": "2",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:                "ceph.vdo": "0"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            },
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "type": "block",
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:            "vg_name": "ceph_vg2"
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:        }
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]:    ]
Dec  3 01:20:27 compute-0 nifty_visvesvaraya[214842]: }
Dec  3 01:20:27 compute-0 systemd[1]: libpod-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope: Deactivated successfully.
Dec  3 01:20:27 compute-0 podman[214826]: 2025-12-03 01:20:27.565365585 +0000 UTC m=+1.140625988 container died 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.59666839 +0000 UTC m=+0.084963329 container create 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:20:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9e7086c02fa76e34c728d4177bdb13c7a5914e581c6899897df53f8f876187-merged.mount: Deactivated successfully.
Dec  3 01:20:27 compute-0 systemd[1]: Started libpod-conmon-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope.
Dec  3 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.569618553 +0000 UTC m=+0.057913512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:27 compute-0 podman[214826]: 2025-12-03 01:20:27.666186781 +0000 UTC m=+1.241447224 container remove 52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:27 compute-0 systemd[1]: libpod-conmon-52ace7bc7380ed41be1c2e3e4d2725cd1bae10b9502bb06d19223e16b763e225.scope: Deactivated successfully.
Dec  3 01:20:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.742893541 +0000 UTC m=+0.231188510 container init 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.755115909 +0000 UTC m=+0.243410848 container start 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:27 compute-0 podman[214888]: 2025-12-03 01:20:27.759893901 +0000 UTC m=+0.248188870 container attach 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:27 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/800152169' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:20:28
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Some PGs (0.142857) are inactive; try again later
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  3 01:20:28 compute-0 podman[215074]: 2025-12-03 01:20:28.9023707 +0000 UTC m=+0.080758683 container create 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:28 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  3 01:20:28 compute-0 nervous_allen[214913]: enabled application 'rbd' on pool 'volumes'
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  3 01:20:28 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 01:20:28 compute-0 podman[215074]: 2025-12-03 01:20:28.874462939 +0000 UTC m=+0.052850932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:28 compute-0 systemd[1]: Started libpod-conmon-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope.
Dec  3 01:20:29 compute-0 podman[214888]: 2025-12-03 01:20:29.005051558 +0000 UTC m=+1.493346517 container died 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:29 compute-0 systemd[1]: libpod-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope: Deactivated successfully.
Dec  3 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a43e13a2560c79c53fd0b8f93079cad9dd594fed5248371f9d542eded72656-merged.mount: Deactivated successfully.
Dec  3 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.059321578 +0000 UTC m=+0.237709561 container init 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.070627301 +0000 UTC m=+0.249015264 container start 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:29 compute-0 agitated_kowalevski[215091]: 167 167
Dec  3 01:20:29 compute-0 systemd[1]: libpod-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope: Deactivated successfully.
Dec  3 01:20:29 compute-0 podman[214888]: 2025-12-03 01:20:29.097221736 +0000 UTC m=+1.585516705 container remove 428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9 (image=quay.io/ceph/ceph:v18, name=nervous_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.109104774 +0000 UTC m=+0.287492767 container attach 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.109458374 +0000 UTC m=+0.287846337 container died 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:29 compute-0 systemd[1]: libpod-conmon-428133f3fad9f17a5df573f9346b06cec814fcfbbb3f15d10ce1d8777cc95cd9.scope: Deactivated successfully.
Dec  3 01:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0863024cf891f2812956fcf4c6309ad0f99b1f32c160d6bffb0b5b67ee9518a1-merged.mount: Deactivated successfully.
Dec  3 01:20:29 compute-0 podman[215074]: 2025-12-03 01:20:29.175847819 +0000 UTC m=+0.354235812 container remove 148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:20:29 compute-0 systemd[1]: libpod-conmon-148f23d5531e171ef277b94ef3b56b2c11e28573efe4abd9fbbd97042d29a0ff.scope: Deactivated successfully.
Dec  3 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.425666344 +0000 UTC m=+0.077023530 container create 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:20:29 compute-0 systemd[1]: Started libpod-conmon-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope.
Dec  3 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.392896608 +0000 UTC m=+0.044253834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.562483726 +0000 UTC m=+0.213840992 container init 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:29 compute-0 python3[215151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.581681946 +0000 UTC m=+0.233039132 container start 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:20:29 compute-0 podman[215152]: 2025-12-03 01:20:29.586970212 +0000 UTC m=+0.238327478 container attach 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.695247905 +0000 UTC m=+0.096087887 container create 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.652318129 +0000 UTC m=+0.053158171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:29 compute-0 podman[158098]: time="2025-12-03T01:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:20:29 compute-0 systemd[1]: Started libpod-conmon-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope.
Dec  3 01:20:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.833584269 +0000 UTC m=+0.234424271 container init 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.851914725 +0000 UTC m=+0.252754707 container start 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:29 compute-0 podman[215173]: 2025-12-03 01:20:29.857439118 +0000 UTC m=+0.258279130 container attach 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32224 "" "Go-http-client/1.1"
Dec  3 01:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6666 "" "Go-http-client/1.1"
Dec  3 01:20:29 compute-0 ceph-mon[192821]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:29 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/520206880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 01:20:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  3 01:20:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  3 01:20:29 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  3 01:20:29 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 01:20:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  3 01:20:30 compute-0 sharp_rubin[215168]: {
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_id": 2,
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "type": "bluestore"
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    },
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_id": 1,
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "type": "bluestore"
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    },
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_id": 0,
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:        "type": "bluestore"
Dec  3 01:20:30 compute-0 sharp_rubin[215168]:    }
Dec  3 01:20:30 compute-0 sharp_rubin[215168]: }
Dec  3 01:20:30 compute-0 systemd[1]: libpod-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Deactivated successfully.
Dec  3 01:20:30 compute-0 systemd[1]: libpod-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Consumed 1.135s CPU time.
Dec  3 01:20:30 compute-0 podman[215152]: 2025-12-03 01:20:30.738502751 +0000 UTC m=+1.389859977 container died 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 01:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbc13ebc97bb64c993f322b387d06e566b2fad50478d2216d535679a36f43b9-merged.mount: Deactivated successfully.
Dec  3 01:20:30 compute-0 podman[215152]: 2025-12-03 01:20:30.852192563 +0000 UTC m=+1.503549769 container remove 22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:30 compute-0 systemd[1]: libpod-conmon-22701487c8c1db73a0310bdc08e4e2416829e91739d945c2116310ed45a25661.scope: Deactivated successfully.
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 01:20:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  3 01:20:30 compute-0 relaxed_nash[215188]: enabled application 'rbd' on pool 'backups'
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:31 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.348283768s) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 43.424728394s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:31 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  3 01:20:31 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=8.348283768s) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown pruub 43.424728394s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:31 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:31 compute-0 systemd[1]: libpod-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope: Deactivated successfully.
Dec  3 01:20:31 compute-0 podman[215173]: 2025-12-03 01:20:31.027430467 +0000 UTC m=+1.428270479 container died 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6006de0be103f15edecc6d18b113826f1212d2a94d1fb578be69cb2e402451-merged.mount: Deactivated successfully.
Dec  3 01:20:31 compute-0 podman[215173]: 2025-12-03 01:20:31.110259307 +0000 UTC m=+1.511099279 container remove 148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e (image=quay.io/ceph/ceph:v18, name=relaxed_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:20:31 compute-0 systemd[1]: libpod-conmon-148cfcc7cb520c7ec528619659e735f018d43204c0ada2e15ec5bbfa092d298e.scope: Deactivated successfully.
Dec  3 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:20:31 compute-0 openstack_network_exporter[160250]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:20:31 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=9.891637802s) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active pruub 52.819717407s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:31 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=9.891637802s) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown pruub 52.819717407s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  3 01:20:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  3 01:20:32 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=18/19 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:32 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/182692636' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 01:20:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=20/21 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=18/18 les/c/f=19/19/0 sis=33) [2] r=0 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=33/34 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=20/20 les/c/f=21/21/0 sis=33) [1] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:32 compute-0 podman[215339]: 2025-12-03 01:20:32.111079598 +0000 UTC m=+0.130752475 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:20:32 compute-0 podman[215340]: 2025-12-03 01:20:32.111248773 +0000 UTC m=+0.128078551 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm)
Dec  3 01:20:32 compute-0 podman[215341]: 2025-12-03 01:20:32.141279663 +0000 UTC m=+0.153848804 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 01:20:32 compute-0 python3[215353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:32 compute-0 podman[215342]: 2025-12-03 01:20:32.187036527 +0000 UTC m=+0.181673032 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 01:20:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.217957852 +0000 UTC m=+0.059799124 container create f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:20:32 compute-0 systemd[1]: Started libpod-conmon-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope.
Dec  3 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.192640212 +0000 UTC m=+0.034481494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.375769134 +0000 UTC m=+0.217610436 container init f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.393946936 +0000 UTC m=+0.235788208 container start f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:32 compute-0 podman[215418]: 2025-12-03 01:20:32.398941594 +0000 UTC m=+0.240782936 container attach f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Dec  3 01:20:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Dec  3 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec  3 01:20:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  3 01:20:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  3 01:20:33 compute-0 loving_ellis[215437]: enabled application 'rbd' on pool 'images'
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  3 01:20:33 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  3 01:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:20:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.498203278s) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active pruub 49.587493896s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35 pruub=12.498203278s) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown pruub 49.587493896s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/854089705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 01:20:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:20:33 compute-0 systemd[1]: libpod-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope: Deactivated successfully.
Dec  3 01:20:33 compute-0 podman[215418]: 2025-12-03 01:20:33.050151424 +0000 UTC m=+0.891992726 container died f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f18aab917db20725ff4ff03ca3a616a91b4727386fad80b504d170195dd93ad1-merged.mount: Deactivated successfully.
Dec  3 01:20:33 compute-0 podman[215418]: 2025-12-03 01:20:33.137888839 +0000 UTC m=+0.979730111 container remove f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6 (image=quay.io/ceph/ceph:v18, name=loving_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:20:33 compute-0 systemd[1]: libpod-conmon-f5c394354b0978548db45487f61dbecf4d2934c376c95cf75350da910f3734a6.scope: Deactivated successfully.
Dec  3 01:20:33 compute-0 ceph-mgr[193109]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec  3 01:20:33 compute-0 python3[215501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.613158886 +0000 UTC m=+0.065704787 container create df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.58761553 +0000 UTC m=+0.040161411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:33 compute-0 systemd[1]: Started libpod-conmon-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope.
Dec  3 01:20:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  3 01:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  3 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.781497039 +0000 UTC m=+0.234042940 container init df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.801099431 +0000 UTC m=+0.253645322 container start df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:33 compute-0 podman[215502]: 2025-12-03 01:20:33.807818046 +0000 UTC m=+0.260363987 container attach df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  3 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  3 01:20:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event e6b4f978-1441-4303-b9a2-cf3500d39b60 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 78db669c-15ce-4651-8c69-cd7c7014c693 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 45b2a057-9edd-4848-82cf-8f672677f39f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 9c7b44ee-6243-4701-a66a-ed63a204737b (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 38621d60-c54c-4259-aa9c-3be614ce8469 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 26e960c2-8609-46ff-82f5-0728fd2a751c (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=24/25 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=24/24 les/c/f=25/25/0 sis=35) [2] r=0 lpr=35 pi=[24,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 93 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec  3 01:20:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  3 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  3 01:20:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:35 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  3 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  3 01:20:35 compute-0 naughty_satoshi[215517]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  3 01:20:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  3 01:20:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=14.834046364s) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active pruub 61.296497345s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37 pruub=14.834046364s) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown pruub 61.296497345s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 systemd[1]: libpod-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope: Deactivated successfully.
Dec  3 01:20:35 compute-0 podman[215545]: 2025-12-03 01:20:35.210033044 +0000 UTC m=+0.052710858 container died df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-05140ca0b8293fcb4c45a6c5cc9980c8bb6e17c98dad3dbee40b836efb9b3863-merged.mount: Deactivated successfully.
Dec  3 01:20:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:35 compute-0 podman[215545]: 2025-12-03 01:20:35.299770594 +0000 UTC m=+0.142448368 container remove df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf (image=quay.io/ceph/ceph:v18, name=naughty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  3 01:20:35 compute-0 systemd[1]: libpod-conmon-df4957ae99038534927c5faa895a568023a053bf7ce6515c08f191d1eac790cf.scope: Deactivated successfully.
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=15.796587944s) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active pruub 69.264137268s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=15.796587944s) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown pruub 69.264137268s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:35 compute-0 python3[215586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.836036787 +0000 UTC m=+0.090663767 container create d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.805429221 +0000 UTC m=+0.060056241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:35 compute-0 systemd[1]: Started libpod-conmon-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope.
Dec  3 01:20:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:35 compute-0 podman[215587]: 2025-12-03 01:20:35.99244145 +0000 UTC m=+0.247068480 container init d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:20:36 compute-0 podman[215587]: 2025-12-03 01:20:36.003513086 +0000 UTC m=+0.258140066 container start d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:20:36 compute-0 podman[215587]: 2025-12-03 01:20:36.010474188 +0000 UTC m=+0.265101248 container attach d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:20:36 compute-0 ceph-mon[192821]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:20:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:20:36 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/4081602292' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  3 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  3 01:20:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=11.445006371s) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 65.343589783s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=11.445006371s) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown pruub 65.343589783s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=35/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [0] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=28/29 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=28/28 les/c/f=29/29/0 sis=37) [1] r=0 lpr=37 pi=[28,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec  3 01:20:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  3 01:20:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  3 01:20:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  3 01:20:36 compute-0 systemd[194622]: Starting Mark boot as successful...
Dec  3 01:20:36 compute-0 systemd[194622]: Finished Mark boot as successful.
Dec  3 01:20:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  3 01:20:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 01:20:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  3 01:20:37 compute-0 unruffled_hopper[215602]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  3 01:20:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  3 01:20:37 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.10( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.16( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=37/39 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.18( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [0] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:37 compute-0 systemd[1]: libpod-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope: Deactivated successfully.
Dec  3 01:20:37 compute-0 podman[215587]: 2025-12-03 01:20:37.144293887 +0000 UTC m=+1.398920867 container died d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a39640ca3fd2ad93ebdbce36d98d33b225a9f773d8a189c4c85c93e4e07e35-merged.mount: Deactivated successfully.
Dec  3 01:20:37 compute-0 podman[215587]: 2025-12-03 01:20:37.23699905 +0000 UTC m=+1.491626040 container remove d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee (image=quay.io/ceph/ceph:v18, name=unruffled_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:20:37 compute-0 systemd[1]: libpod-conmon-d423937529564ceafb15dbcdf789485855897671d0cabb80566a1c4f9ece66ee.scope: Deactivated successfully.
Dec  3 01:20:37 compute-0 podman[215629]: 2025-12-03 01:20:37.355507305 +0000 UTC m=+0.165372162 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 01:20:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  3 01:20:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  3 01:20:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  3 01:20:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  3 01:20:38 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/1032722943' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 01:20:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 2 peering, 124 unknown, 67 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:38 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 9 completed events
Dec  3 01:20:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:20:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:38 compute-0 python3[215733]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:20:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  3 01:20:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  3 01:20:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 01:20:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 01:20:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:39 compute-0 python3[215804]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724838.1950521-37122-278134840308959/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:20:40 compute-0 ceph-mon[192821]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 01:20:40 compute-0 ceph-mon[192821]: Cluster is now healthy
Dec  3 01:20:40 compute-0 python3[215906]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:20:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:40 compute-0 python3[215981]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724839.6590133-37136-212816261421466/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=125cc056cc8761ce32a20a6ad2f9158e18b24cbb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:20:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  3 01:20:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  3 01:20:41 compute-0 python3[216032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:41 compute-0 podman[216031]: 2025-12-03 01:20:41.395215623 +0000 UTC m=+0.160136007 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Dec  3 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.526090261 +0000 UTC m=+0.097121656 container create cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.487820933 +0000 UTC m=+0.058852378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:41 compute-0 systemd[1]: Started libpod-conmon-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope.
Dec  3 01:20:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.713010167 +0000 UTC m=+0.284041632 container init cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.738673947 +0000 UTC m=+0.309705352 container start cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  3 01:20:41 compute-0 podman[216052]: 2025-12-03 01:20:41.746079271 +0000 UTC m=+0.317110736 container attach cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:20:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 01:20:42 compute-0 fervent_khayyam[216067]: 
Dec  3 01:20:42 compute-0 fervent_khayyam[216067]: [global]
Dec  3 01:20:42 compute-0 fervent_khayyam[216067]: #011fsid = 3765feb2-36f8-5b86-b74c-64e9221f9c4c
Dec  3 01:20:42 compute-0 fervent_khayyam[216067]: #011mon_host = 192.168.122.100
Dec  3 01:20:42 compute-0 systemd[1]: libpod-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope: Deactivated successfully.
Dec  3 01:20:42 compute-0 podman[216052]: 2025-12-03 01:20:42.380336052 +0000 UTC m=+0.951367457 container died cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e0692aa21b3be1db5a671eb30a5a97d67f84c7d74e8b0efedd33e83f2fe1901-merged.mount: Deactivated successfully.
Dec  3 01:20:42 compute-0 podman[216052]: 2025-12-03 01:20:42.466876023 +0000 UTC m=+1.037907398 container remove cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73 (image=quay.io/ceph/ceph:v18, name=fervent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:20:42 compute-0 systemd[1]: libpod-conmon-cc2b0a1b6596b98ddba5a1e6703119efe39ae4cf295fab386526263d7a7d7f73.scope: Deactivated successfully.
Dec  3 01:20:42 compute-0 python3[216185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:42 compute-0 podman[216227]: 2025-12-03 01:20:42.958204014 +0000 UTC m=+0.084368243 container create 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:42.928202135 +0000 UTC m=+0.054366454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:43 compute-0 systemd[1]: Started libpod-conmon-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope.
Dec  3 01:20:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.126964019 +0000 UTC m=+0.253128328 container init 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.149230384 +0000 UTC m=+0.275394603 container start 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.155066785 +0000 UTC m=+0.281231084 container attach 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  3 01:20:43 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2484226444' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925077438s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928672791s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925030708s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928672791s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909792900s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913459778s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925196648s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928901672s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909742355s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913459778s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925130844s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928901672s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925121307s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928962708s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925107002s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928962708s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909637451s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913513184s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909616470s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913513184s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925292969s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929244995s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909537315s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913520813s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.925277710s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929244995s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909441948s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913558960s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909496307s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913520813s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909509659s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913543701s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909420967s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913558960s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924942970s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929168701s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924926758s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929168701s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909379959s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913543701s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909258842s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913566589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924689293s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929031372s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924675941s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929031372s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909224510s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913566589s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924533844s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.928985596s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909150124s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913589478s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924518585s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.928985596s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909098625s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913597107s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909090042s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913589478s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924591064s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929130554s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.909068108s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913597107s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924578667s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929130554s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924288750s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929046631s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924226761s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929046631s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908742905s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913658142s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908735275s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913711548s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908703804s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913658142s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908716202s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913711548s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924112320s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929260254s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908642769s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913795471s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924101830s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929275513s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924075127s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929260254s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924107552s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929344177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924056053s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929275513s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.924077988s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929344177s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923935890s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929382324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923906326s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929382324s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908274651s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913772583s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908241272s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913772583s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908620834s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913795471s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908068657s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913757324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923716545s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929435730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908052444s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913780212s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923698425s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929435730s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908040047s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913757324s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908013344s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913780212s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908015251s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913810730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.908002853s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913810730s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907896042s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913749695s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907869339s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913749695s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923476219s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929389954s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907832146s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913856506s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907732010s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913856506s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923254967s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929473877s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923239708s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929473877s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907852173s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913894653s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923446655s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929389954s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.931925774s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.938255310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907569885s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913894653s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907341957s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913673401s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.931911469s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.938255310s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907565117s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 69.913917542s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907526970s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913917542s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/38 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40 pruub=8.907274246s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.913673401s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.923031807s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929504395s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922818184s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 70.929519653s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922801971s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929519653s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=9.922622681s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.929504395s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809853554s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112083435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809832573s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112083435s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826898575s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129302979s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826884270s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129302979s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.812939644s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.115432739s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.812928200s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.115432739s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809509277s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112071991s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809497833s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112071991s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809473038s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112117767s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809461594s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112117767s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809336662s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112068176s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.809015274s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112060547s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808982849s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112060547s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825245857s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.128501892s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825227737s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.128501892s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827450752s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130931854s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827425957s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130931854s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808449745s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.112056732s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825565338s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129222870s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.808361053s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112056732s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.802107811s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105953217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825403214s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129222870s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.802091599s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105953217s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826066017s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130004883s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826048851s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130004883s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825430870s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.129425049s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825416565s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.129425049s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801868439s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105918884s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801853180s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105918884s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827801704s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131893158s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827788353s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131893158s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801709175s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105876923s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801693916s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105876923s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825785637s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.130012512s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.825771332s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.130012512s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801587105s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105865479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801566124s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105865479s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827269554s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131668091s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827250481s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131668091s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801406860s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105854034s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.801389694s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105854034s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800129890s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104690552s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800110817s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104690552s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827075005s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131679535s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.827056885s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131679535s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799907684s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104595184s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826981544s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131687164s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799892426s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104595184s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826968193s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131687164s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826895714s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131694794s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826881409s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131694794s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799719810s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104568481s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799695015s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104568481s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826835632s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131740570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826821327s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131740570s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800139427s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.105125427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.800117493s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.105125427s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826739311s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131759644s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826723099s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131759644s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799365044s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104473114s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826889038s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.132007599s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.799349785s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104473114s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826873779s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.132007599s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798804283s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104057312s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798768044s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104045868s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798783302s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104057312s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798748016s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104045868s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826456070s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131896973s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798575401s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104038239s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826438904s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131896973s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798559189s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104038239s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798506737s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104076385s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826265335s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131843567s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826251030s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131843567s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798459053s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104076385s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826229095s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131900787s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826214790s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131900787s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796794891s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.102523804s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796780586s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.102523804s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826092720s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 62.131889343s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=14.826077461s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.131889343s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798119545s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104064941s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798089027s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104064941s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798554420s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 60.104587555s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.798313141s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.104587555s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.797544479s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.112068176s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873694420s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488307953s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873666763s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488307953s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795949936s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410713196s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795934677s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410713196s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.796019554s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410881042s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795972824s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410881042s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873249054s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488361359s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.873228073s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488361359s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795376778s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410636902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795359612s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410636902s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795269966s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410652161s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.795253754s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410652161s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872805595s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488334656s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872785568s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488334656s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794851303s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410591125s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794834137s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410591125s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794728279s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410583496s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794713020s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410583496s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872503281s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488487244s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872488022s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488487244s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794487000s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410644531s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794467926s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410644531s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794275284s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410598755s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.794255257s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410598755s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872102737s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488574982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.872078896s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488574982s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871980667s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488578796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871963501s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488578796s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.793778419s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410545349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.793760300s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410545349s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871706009s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488594055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871688843s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488594055s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871526718s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488616943s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871507645s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488616943s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871363640s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488620758s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871346474s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488620758s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871274948s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488651276s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.871257782s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488651276s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.792291641s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409851074s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.792263985s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409851074s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870978355s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488723755s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870955467s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488723755s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791929245s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409812927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791883469s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409812927s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870505333s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489326477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.870460510s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489326477s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791410446s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410537720s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.791353226s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410537720s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869458199s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488822937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790431023s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409797668s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869435310s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488822937s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790398598s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409797668s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790238380s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409790039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790216446s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409790039s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869569778s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489196777s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.869544983s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489196777s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790086746s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409790039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790070534s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409790039s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868990898s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488792419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790706635s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.410545349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868967056s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488792419s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.790687561s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.410545349s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789342880s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409431458s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868741989s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488849640s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789314270s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409431458s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868718147s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488849640s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868660927s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488925934s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789170265s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409454346s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.789145470s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409454346s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868627548s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488925934s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868504524s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.488952637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868486404s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.488952637s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788828850s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409408569s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788810730s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409408569s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788667679s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.409393311s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868463516s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489189148s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.788648605s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.409393311s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868432999s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489189148s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868255615s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 63.489154816s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=8.868234634s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.489154816s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.781010628s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 67.408958435s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=12.780977249s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.408958435s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:20:43 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event c6336473-620e-4396-9742-5356787fe4c2 (Global Recovery Event) in 10 seconds
Dec  3 01:20:43 compute-0 podman[216313]: 2025-12-03 01:20:43.660676681 +0000 UTC m=+0.148926868 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:20:43 compute-0 podman[216313]: 2025-12-03 01:20:43.774302541 +0000 UTC m=+0.262552718 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec  3 01:20:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/246700991' entity='client.admin' 
Dec  3 01:20:43 compute-0 heuristic_hermann[216253]: set ssl_option
Dec  3 01:20:43 compute-0 systemd[1]: libpod-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope: Deactivated successfully.
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.884076566 +0000 UTC m=+1.010240835 container died 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cefca2c566632f15b3a038fc127486c0469b9ad81d752645e664ef181139b064-merged.mount: Deactivated successfully.
Dec  3 01:20:43 compute-0 podman[216227]: 2025-12-03 01:20:43.97938775 +0000 UTC m=+1.105552009 container remove 4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6 (image=quay.io/ceph/ceph:v18, name=heuristic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:20:43 compute-0 systemd[1]: libpod-conmon-4ed5f45987c55b371b284f2f8fcfb3a8d38b4907e1223851d84eb294346d71f6.scope: Deactivated successfully.
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  3 01:20:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:20:44 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/246700991' entity='client.admin' 
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=33/20 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=37/28 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=33/18 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=35/24 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:20:44 compute-0 python3[216446]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.508412312 +0000 UTC m=+0.081656858 container create dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:20:44 compute-0 systemd[1]: Started libpod-conmon-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope.
Dec  3 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.480732177 +0000 UTC m=+0.053976713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.644483944 +0000 UTC m=+0.217728510 container init dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6df04731-ee4e-45c6-a84d-11b50089b0f9 does not exist
Dec  3 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2d8b3d8f-b81a-4020-95da-dcda1fa2f3ec does not exist
Dec  3 01:20:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8977d211-6977-4840-8bc6-93500eed2775 does not exist
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.653231845 +0000 UTC m=+0.226476381 container start dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:44 compute-0 podman[216471]: 2025-12-03 01:20:44.657759691 +0000 UTC m=+0.231004257 container attach dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  3 01:20:44 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  3 01:20:44 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec  3 01:20:44 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec  3 01:20:45 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:20:45 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec  3 01:20:45 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  3 01:20:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 01:20:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:45 compute-0 nervous_shannon[216499]: Scheduled rgw.rgw update...
Dec  3 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:45 compute-0 systemd[1]: libpod-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope: Deactivated successfully.
Dec  3 01:20:45 compute-0 podman[216471]: 2025-12-03 01:20:45.252352885 +0000 UTC m=+0.825597461 container died dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:20:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0db5a9aac97be4d6b1331c4d10536f16bab95fbf20039dab67bd8a732ed4d9-merged.mount: Deactivated successfully.
Dec  3 01:20:45 compute-0 podman[216471]: 2025-12-03 01:20:45.346074086 +0000 UTC m=+0.919318662 container remove dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30 (image=quay.io/ceph/ceph:v18, name=nervous_shannon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:45 compute-0 systemd[1]: libpod-conmon-dfd4f67ab4b07f0cf2527dacb2075e71937b2cb4ddba09543ba4ffe0d74cfd30.scope: Deactivated successfully.
Dec  3 01:20:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  3 01:20:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.778561969 +0000 UTC m=+0.066277753 container create c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:20:45 compute-0 systemd[1]: Started libpod-conmon-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope.
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.757403564 +0000 UTC m=+0.045119388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.901131367 +0000 UTC m=+0.188847211 container init c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.917476689 +0000 UTC m=+0.205192493 container start c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.924007919 +0000 UTC m=+0.211723743 container attach c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:45 compute-0 relaxed_murdock[216690]: 167 167
Dec  3 01:20:45 compute-0 systemd[1]: libpod-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope: Deactivated successfully.
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.92622191 +0000 UTC m=+0.213937694 container died c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2547c6140542ca90a2c8f66aff2616a58465208cfabe0f04c0527164914d9a48-merged.mount: Deactivated successfully.
Dec  3 01:20:45 compute-0 podman[216675]: 2025-12-03 01:20:45.994663402 +0000 UTC m=+0.282379186 container remove c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:20:46 compute-0 systemd[1]: libpod-conmon-c456a975fca70027cf66babe6be692410584505e0f6fc305a40a8367c8075458.scope: Deactivated successfully.
Dec  3 01:20:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:46 compute-0 ceph-mon[192821]: Saving service rgw.rgw spec with placement compute-0
Dec  3 01:20:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.278820846 +0000 UTC m=+0.093419953 container create eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.229172934 +0000 UTC m=+0.043772111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:46 compute-0 systemd[1]: Started libpod-conmon-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope.
Dec  3 01:20:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.42476184 +0000 UTC m=+0.239360977 container init eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.439453906 +0000 UTC m=+0.254053033 container start eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:20:46 compute-0 podman[216731]: 2025-12-03 01:20:46.446284085 +0000 UTC m=+0.260883272 container attach eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:20:46 compute-0 podman[216770]: 2025-12-03 01:20:46.476211892 +0000 UTC m=+0.126465536 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:20:46 compute-0 python3[216830]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:20:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  3 01:20:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  3 01:20:47 compute-0 python3[216901]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724846.2006643-37177-150490665735229/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:20:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  3 01:20:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  3 01:20:47 compute-0 friendly_kirch[216786]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:20:47 compute-0 friendly_kirch[216786]: --> relative data size: 1.0
Dec  3 01:20:47 compute-0 friendly_kirch[216786]: --> All data devices are unavailable
Dec  3 01:20:47 compute-0 systemd[1]: libpod-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Deactivated successfully.
Dec  3 01:20:47 compute-0 podman[216731]: 2025-12-03 01:20:47.790172521 +0000 UTC m=+1.604771628 container died eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:47 compute-0 systemd[1]: libpod-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Consumed 1.291s CPU time.
Dec  3 01:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-640bac675afb6769cd5f9f5b1ad6030b258b008876817d42f20307910f1bb009-merged.mount: Deactivated successfully.
Dec  3 01:20:47 compute-0 python3[216975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:47 compute-0 podman[216731]: 2025-12-03 01:20:47.908482291 +0000 UTC m=+1.723081398 container remove eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 01:20:47 compute-0 systemd[1]: libpod-conmon-eb802bf4a15bf0bcb465cad5ffe30945a4498b9097b06ccb9dc14d222d9ab7a9.scope: Deactivated successfully.
Dec  3 01:20:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  3 01:20:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  3 01:20:47 compute-0 podman[216988]: 2025-12-03 01:20:47.994453187 +0000 UTC m=+0.063759313 container create e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:20:48 compute-0 systemd[1]: Started libpod-conmon-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope.
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:47.97319461 +0000 UTC m=+0.042500696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.142609142 +0000 UTC m=+0.211915238 container init e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.152716052 +0000 UTC m=+0.222022138 container start e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.156766884 +0000 UTC m=+0.226072970 container attach e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 36 peering, 157 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 10 completed events
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  3 01:20:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  3 01:20:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  3 01:20:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  3 01:20:48 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0[192817]: 2025-12-03T01:20:48.773+0000 7ff315e65640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e2 new map
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T01:20:48.773680+0000#012modified#0112025-12-03T01:20:48.773725+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 01:20:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:48 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 01:20:48 compute-0 systemd[1]: libpod-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope: Deactivated successfully.
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.845200602 +0000 UTC m=+0.914506728 container died e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-531d51b1524b1958dd8bd48023675f7cea1c8c98787a856b4b3611e2c34efa8e-merged.mount: Deactivated successfully.
Dec  3 01:20:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  3 01:20:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  3 01:20:48 compute-0 podman[216988]: 2025-12-03 01:20:48.955393538 +0000 UTC m=+1.024699634 container remove e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c (image=quay.io/ceph/ceph:v18, name=gracious_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:20:48 compute-0 systemd[1]: libpod-conmon-e6ebac011e4996a3a19a68fdf2a07d5365eb7fbdcba1b1fdaffb1e69e2228e0c.scope: Deactivated successfully.
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.126720083 +0000 UTC m=+0.084570418 container create ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.09659656 +0000 UTC m=+0.054446955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:49 compute-0 systemd[1]: Started libpod-conmon-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope.
Dec  3 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.267396561 +0000 UTC m=+0.225246936 container init ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.284113403 +0000 UTC m=+0.241963688 container start ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.289281786 +0000 UTC m=+0.247132171 container attach ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:20:49 compute-0 elastic_ardinghelli[217215]: 167 167
Dec  3 01:20:49 compute-0 systemd[1]: libpod-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope: Deactivated successfully.
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.292656199 +0000 UTC m=+0.250506574 container died ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-98baa9ce423971f568ff3af76f932c2149b3db26facba9579012b9bb9c0eced0-merged.mount: Deactivated successfully.
Dec  3 01:20:49 compute-0 podman[217175]: 2025-12-03 01:20:49.374137521 +0000 UTC m=+0.331987826 container remove ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:20:49 compute-0 python3[217219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:49 compute-0 systemd[1]: libpod-conmon-ca1a5a2c80b7ee69a37503df0b800eb2cc696d8280658e1ee80ebd463753c3b7.scope: Deactivated successfully.
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  3 01:20:49 compute-0 ceph-mon[192821]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 01:20:49 compute-0 ceph-mon[192821]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 01:20:49 compute-0 ceph-mon[192821]: Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.490007994 +0000 UTC m=+0.073595605 container create bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.46126148 +0000 UTC m=+0.044849161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:49 compute-0 systemd[1]: Started libpod-conmon-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope.
Dec  3 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.668104557 +0000 UTC m=+0.251692198 container init bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.685658672 +0000 UTC m=+0.269246283 container start bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.693338774 +0000 UTC m=+0.095452429 container create 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:49 compute-0 podman[217236]: 2025-12-03 01:20:49.698556328 +0000 UTC m=+0.282143939 container attach bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec  3 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.654442219 +0000 UTC m=+0.056555894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:49 compute-0 systemd[1]: Started libpod-conmon-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope.
Dec  3 01:20:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.881853065 +0000 UTC m=+0.283966730 container init 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.902196787 +0000 UTC m=+0.304310452 container start 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:49 compute-0 podman[217261]: 2025-12-03 01:20:49.908199163 +0000 UTC m=+0.310312818 container attach 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  3 01:20:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  3 01:20:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:50 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 01:20:50 compute-0 ceph-mgr[193109]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:50 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 01:20:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:50 compute-0 focused_jepsen[217259]: Scheduled mds.cephfs update...
Dec  3 01:20:50 compute-0 systemd[1]: libpod-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope: Deactivated successfully.
Dec  3 01:20:50 compute-0 conmon[217259]: conmon bcbbd9cc8de050deae2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope/container/memory.events
Dec  3 01:20:50 compute-0 podman[217236]: 2025-12-03 01:20:50.36286561 +0000 UTC m=+0.946453241 container died bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11737a3222362a10b18a13558d00739660e19f90c4ea3a6aa28c6c235a1c61b-merged.mount: Deactivated successfully.
Dec  3 01:20:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:50 compute-0 podman[217236]: 2025-12-03 01:20:50.459919333 +0000 UTC m=+1.043506954 container remove bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226 (image=quay.io/ceph/ceph:v18, name=focused_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:50 compute-0 systemd[1]: libpod-conmon-bcbbd9cc8de050deae2f1fffc7a6f91b38791673ff226df86e462923e4b2e226.scope: Deactivated successfully.
Dec  3 01:20:50 compute-0 magical_jepsen[217278]: {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    "0": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "devices": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "/dev/loop3"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            ],
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_name": "ceph_lv0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_size": "21470642176",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "name": "ceph_lv0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "tags": {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.crush_device_class": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.encrypted": "0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_id": "0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.vdo": "0"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            },
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "vg_name": "ceph_vg0"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        }
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    ],
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    "1": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "devices": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "/dev/loop4"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            ],
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_name": "ceph_lv1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_size": "21470642176",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "name": "ceph_lv1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "tags": {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.crush_device_class": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.encrypted": "0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_id": "1",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.vdo": "0"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            },
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "vg_name": "ceph_vg1"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        }
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    ],
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    "2": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "devices": [
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "/dev/loop5"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            ],
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_name": "ceph_lv2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_size": "21470642176",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "name": "ceph_lv2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "tags": {
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.cluster_name": "ceph",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.crush_device_class": "",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.encrypted": "0",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osd_id": "2",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:                "ceph.vdo": "0"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            },
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "type": "block",
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:            "vg_name": "ceph_vg2"
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:        }
Dec  3 01:20:50 compute-0 magical_jepsen[217278]:    ]
Dec  3 01:20:50 compute-0 magical_jepsen[217278]: }
Dec  3 01:20:50 compute-0 systemd[1]: libpod-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope: Deactivated successfully.
Dec  3 01:20:50 compute-0 podman[217261]: 2025-12-03 01:20:50.754078134 +0000 UTC m=+1.156191799 container died 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fee8e292cc305b79df03174f006d7cf77a6497c3981b9ebab33178a65ad88155-merged.mount: Deactivated successfully.
Dec  3 01:20:50 compute-0 podman[217261]: 2025-12-03 01:20:50.863068256 +0000 UTC m=+1.265181921 container remove 1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:20:50 compute-0 systemd[1]: libpod-conmon-1f314677d5c021a33430f7fc17bd7ea3e1b9a7383a9f9f83a1242f40399eb341.scope: Deactivated successfully.
Dec  3 01:20:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Dec  3 01:20:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Dec  3 01:20:51 compute-0 ceph-mon[192821]: Saving service mds.cephfs spec with placement compute-0
Dec  3 01:20:51 compute-0 python3[217464]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 01:20:51 compute-0 podman[217614]: 2025-12-03 01:20:51.971906365 +0000 UTC m=+0.090658197 container create 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:51.932500966 +0000 UTC m=+0.051252878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:52 compute-0 systemd[1]: Started libpod-conmon-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope.
Dec  3 01:20:52 compute-0 python3[217622]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764724850.9495606-37207-99371674157266/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=085db63d611f66658452414c8f83e35d20a7cbf6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.095391348 +0000 UTC m=+0.214143250 container init 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.120326877 +0000 UTC m=+0.239078739 container start 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.126904769 +0000 UTC m=+0.245656641 container attach 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:52 compute-0 friendly_volhard[217632]: 167 167
Dec  3 01:20:52 compute-0 systemd[1]: libpod-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope: Deactivated successfully.
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.130156689 +0000 UTC m=+0.248908551 container died 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd074524eb6d814e2110c9c63379c94f1d1edc751a622185b23b06b097fdee65-merged.mount: Deactivated successfully.
Dec  3 01:20:52 compute-0 podman[217614]: 2025-12-03 01:20:52.203513217 +0000 UTC m=+0.322265089 container remove 6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:52 compute-0 systemd[1]: libpod-conmon-6987009783f5c4c3ae359444271fac5096774aab16d122f0f3f67990fecc9702.scope: Deactivated successfully.
Dec  3 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.431817017 +0000 UTC m=+0.064470423 container create c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:52 compute-0 systemd[1]: Started libpod-conmon-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope.
Dec  3 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.413596314 +0000 UTC m=+0.046249730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.610141466 +0000 UTC m=+0.242794922 container init c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.649361649 +0000 UTC m=+0.282015085 container start c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:20:52 compute-0 podman[217680]: 2025-12-03 01:20:52.656919528 +0000 UTC m=+0.289572984 container attach c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:20:52 compute-0 python3[217725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:52 compute-0 podman[217727]: 2025-12-03 01:20:52.864707061 +0000 UTC m=+0.078408248 container create ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:20:52 compute-0 podman[217727]: 2025-12-03 01:20:52.826073314 +0000 UTC m=+0.039774541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:52 compute-0 systemd[1]: Started libpod-conmon-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope.
Dec  3 01:20:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  3 01:20:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  3 01:20:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.037689023 +0000 UTC m=+0.251390240 container init ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.053988233 +0000 UTC m=+0.267689390 container start ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.058858768 +0000 UTC m=+0.272560005 container attach ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:20:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec  3 01:20:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  3 01:20:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  3 01:20:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  3 01:20:53 compute-0 systemd[1]: libpod-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope: Deactivated successfully.
Dec  3 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.760324117 +0000 UTC m=+0.974025314 container died ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  3 01:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-af27cbc15dc94e397f4ba4ce99f9766b61a0f48b5b111319c9db06b59ed9212f-merged.mount: Deactivated successfully.
Dec  3 01:20:53 compute-0 upbeat_wright[217696]: {
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_id": 2,
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "type": "bluestore"
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    },
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_id": 1,
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "type": "bluestore"
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    },
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_id": 0,
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:        "type": "bluestore"
Dec  3 01:20:53 compute-0 upbeat_wright[217696]:    }
Dec  3 01:20:53 compute-0 upbeat_wright[217696]: }
Dec  3 01:20:53 compute-0 podman[217727]: 2025-12-03 01:20:53.854909001 +0000 UTC m=+1.068610188 container remove ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7 (image=quay.io/ceph/ceph:v18, name=mystifying_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:53 compute-0 systemd[1]: libpod-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Deactivated successfully.
Dec  3 01:20:53 compute-0 systemd[1]: libpod-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Consumed 1.201s CPU time.
Dec  3 01:20:53 compute-0 systemd[1]: libpod-conmon-ccea185fcb59bcde5081bb0bb90bfe3fd587109811d7ca0a2c25f0be89e01bd7.scope: Deactivated successfully.
Dec  3 01:20:53 compute-0 podman[217806]: 2025-12-03 01:20:53.944348103 +0000 UTC m=+0.059498195 container died c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-26bebf424fa60c4d78ff8ac78623c8955b3b0062eead4312b2a14ddae0e9c4ed-merged.mount: Deactivated successfully.
Dec  3 01:20:54 compute-0 podman[217806]: 2025-12-03 01:20:54.045563671 +0000 UTC m=+0.160713693 container remove c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:54 compute-0 systemd[1]: libpod-conmon-c903d71a10e6484bf577a9ca4b7f19877010c36a24cf072ca80905ba4acb5b17.scope: Deactivated successfully.
Dec  3 01:20:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  3 01:20:54 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2251543419' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  3 01:20:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:54 compute-0 python3[217928]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  3 01:20:54 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  3 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.788391433 +0000 UTC m=+0.087659254 container create 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.765064038 +0000 UTC m=+0.064331929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:54 compute-0 systemd[1]: Started libpod-conmon-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope.
Dec  3 01:20:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.939188161 +0000 UTC m=+0.238456052 container init 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.955809401 +0000 UTC m=+0.255077222 container start 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:54 compute-0 podman[217968]: 2025-12-03 01:20:54.960908282 +0000 UTC m=+0.260176123 container attach 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:20:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 01:20:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156561531' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 01:20:55 compute-0 podman[218103]: 2025-12-03 01:20:55.640176327 +0000 UTC m=+0.102540715 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:20:55 compute-0 objective_fermat[218014]: 
Dec  3 01:20:55 compute-0 objective_fermat[218014]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":195,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84168704,"bytes_avail":64327757824,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-12-03T01:20:50.203901+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  3 01:20:55 compute-0 systemd[1]: libpod-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope: Deactivated successfully.
Dec  3 01:20:55 compute-0 podman[217968]: 2025-12-03 01:20:55.662231746 +0000 UTC m=+0.961499567 container died 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:20:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  3 01:20:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  3 01:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfa7425ca14dc1e8d72670dedfd0b6eb43bda65f7301c31631c1bc5d16a49d0c-merged.mount: Deactivated successfully.
Dec  3 01:20:55 compute-0 podman[217968]: 2025-12-03 01:20:55.752022218 +0000 UTC m=+1.051290039 container remove 2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969 (image=quay.io/ceph/ceph:v18, name=objective_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:20:55 compute-0 systemd[1]: libpod-conmon-2d5da8b52333547ed08227319940ad480b9d8389a41738b6da9bf9511cdbd969.scope: Deactivated successfully.
Dec  3 01:20:55 compute-0 podman[218103]: 2025-12-03 01:20:55.782699086 +0000 UTC m=+0.245063444 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:20:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:56 compute-0 python3[218206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.338672613 +0000 UTC m=+0.071894929 container create fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.303082869 +0000 UTC m=+0.036305225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:56 compute-0 systemd[1]: Started libpod-conmon-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope.
Dec  3 01:20:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.482262131 +0000 UTC m=+0.215484427 container init fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.49306075 +0000 UTC m=+0.226283026 container start fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:56 compute-0 podman[218225]: 2025-12-03 01:20:56.497874093 +0000 UTC m=+0.231096449 container attach fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0744da57-f806-48ae-9324-54022f6b951f does not exist
Dec  3 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 201b98e6-a573-4b95-9cab-d3a3dae42431 does not exist
Dec  3 01:20:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3edf50b9-a89c-467f-8014-35ac5716ba08 does not exist
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:20:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:20:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:20:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927838492' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:20:57 compute-0 elastic_pike[218258]: 
Dec  3 01:20:57 compute-0 elastic_pike[218258]: {"epoch":1,"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","modified":"2025-12-03T01:17:32.534037Z","created":"2025-12-03T01:17:32.534037Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  3 01:20:57 compute-0 elastic_pike[218258]: dumped monmap epoch 1
Dec  3 01:20:57 compute-0 systemd[1]: libpod-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope: Deactivated successfully.
Dec  3 01:20:57 compute-0 conmon[218258]: conmon fcf3365b15bd3338bcaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope/container/memory.events
Dec  3 01:20:57 compute-0 podman[218225]: 2025-12-03 01:20:57.178956728 +0000 UTC m=+0.912179094 container died fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f7b64c4291c6b4dae54e3528b322c86cd7e9b0614d32745651ea6135a8ba51-merged.mount: Deactivated successfully.
Dec  3 01:20:57 compute-0 podman[218225]: 2025-12-03 01:20:57.261116559 +0000 UTC m=+0.994338835 container remove fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665 (image=quay.io/ceph/ceph:v18, name=elastic_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  3 01:20:57 compute-0 systemd[1]: libpod-conmon-fcf3365b15bd3338bcaf339156ee602e1a0aaeec2ea387c23404cc71dba87665.scope: Deactivated successfully.
Dec  3 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:20:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:20:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  3 01:20:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.790856262 +0000 UTC m=+0.115812633 container create 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.760231625 +0000 UTC m=+0.085188086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:57 compute-0 systemd[1]: Started libpod-conmon-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope.
Dec  3 01:20:57 compute-0 python3[218479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:20:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.946277037 +0000 UTC m=+0.271233448 container init 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.961051596 +0000 UTC m=+0.286007997 container start 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.967695779 +0000 UTC m=+0.292652140 container attach 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:20:57 compute-0 unruffled_bassi[218485]: 167 167
Dec  3 01:20:57 compute-0 systemd[1]: libpod-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope: Deactivated successfully.
Dec  3 01:20:57 compute-0 podman[218456]: 2025-12-03 01:20:57.972873583 +0000 UTC m=+0.297829964 container died 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2344a7da3a65aa2dc55f98f13f6a3563c26929695d6ba1dbfa39180b5a85bc-merged.mount: Deactivated successfully.
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.030204867 +0000 UTC m=+0.105088685 container create 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:20:58 compute-0 podman[218456]: 2025-12-03 01:20:58.064964498 +0000 UTC m=+0.389920869 container remove 271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:57.994944513 +0000 UTC m=+0.069828401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:20:58 compute-0 systemd[1]: libpod-conmon-271db199829fef8919c6e7025e969a47aba24fc478900903beebf8d0c1dc720e.scope: Deactivated successfully.
Dec  3 01:20:58 compute-0 systemd[1]: Started libpod-conmon-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope.
Dec  3 01:20:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.209836252 +0000 UTC m=+0.284720160 container init 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.221024742 +0000 UTC m=+0.295908560 container start 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.225893386 +0000 UTC m=+0.300777234 container attach 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.306820693 +0000 UTC m=+0.073319868 container create 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.271161967 +0000 UTC m=+0.037661142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:20:58 compute-0 systemd[1]: Started libpod-conmon-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope.
Dec  3 01:20:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.49918144 +0000 UTC m=+0.265680625 container init 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.521841696 +0000 UTC m=+0.288340871 container start 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 01:20:58 compute-0 podman[218526]: 2025-12-03 01:20:58.528022887 +0000 UTC m=+0.294522042 container attach 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec  3 01:20:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3838596819' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  3 01:20:58 compute-0 recursing_satoshi[218517]: [client.openstack]
Dec  3 01:20:58 compute-0 recursing_satoshi[218517]: #011key = AQCCjy9pAAAAABAAp+KNKPmL/Q89NduD/bXpeQ==
Dec  3 01:20:58 compute-0 recursing_satoshi[218517]: #011caps mgr = "allow *"
Dec  3 01:20:58 compute-0 recursing_satoshi[218517]: #011caps mon = "profile rbd"
Dec  3 01:20:58 compute-0 recursing_satoshi[218517]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  3 01:20:58 compute-0 systemd[1]: libpod-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope: Deactivated successfully.
Dec  3 01:20:58 compute-0 podman[218488]: 2025-12-03 01:20:58.9601166 +0000 UTC m=+1.035000448 container died 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f47523d188b44566dc19821ca832415a32a314cd14deb1ac54a417cc1798ab69-merged.mount: Deactivated successfully.
Dec  3 01:20:59 compute-0 podman[218488]: 2025-12-03 01:20:59.050698084 +0000 UTC m=+1.125581932 container remove 47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba (image=quay.io/ceph/ceph:v18, name=recursing_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:20:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  3 01:20:59 compute-0 systemd[1]: libpod-conmon-47f472a55ca1fe3f24186fa8c966ad8fff709eaaa19fa619f3e75d511208bcba.scope: Deactivated successfully.
Dec  3 01:20:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  3 01:20:59 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3838596819' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  3 01:20:59 compute-0 podman[158098]: time="2025-12-03T01:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30878 "" "Go-http-client/1.1"
Dec  3 01:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6281 "" "Go-http-client/1.1"
Dec  3 01:20:59 compute-0 quirky_wing[218542]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:20:59 compute-0 quirky_wing[218542]: --> relative data size: 1.0
Dec  3 01:20:59 compute-0 quirky_wing[218542]: --> All data devices are unavailable
Dec  3 01:20:59 compute-0 systemd[1]: libpod-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Deactivated successfully.
Dec  3 01:20:59 compute-0 podman[218526]: 2025-12-03 01:20:59.8523167 +0000 UTC m=+1.618815885 container died 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:20:59 compute-0 systemd[1]: libpod-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Consumed 1.258s CPU time.
Dec  3 01:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d39bf1e69197740a2eae7118d13ba01742c6a034d3625d8759cf549f80345f-merged.mount: Deactivated successfully.
Dec  3 01:20:59 compute-0 podman[218526]: 2025-12-03 01:20:59.972622826 +0000 UTC m=+1.739121981 container remove 1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:20:59 compute-0 systemd[1]: libpod-conmon-1a66b678927623a636f287bb5c32bb0916470c795c4858093029ad0d2ef5498c.scope: Deactivated successfully.
Dec  3 01:21:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:00 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  3 01:21:00 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  3 01:21:01 compute-0 ansible-async_wrapper.py[218878]: Invoked with j777914220905 30 /home/zuul/.ansible/tmp/ansible-tmp-1764724860.1459956-37279-206255486704134/AnsiballZ_command.py _
Dec  3 01:21:01 compute-0 ansible-async_wrapper.py[218902]: Starting module and watcher
Dec  3 01:21:01 compute-0 ansible-async_wrapper.py[218902]: Start watching 218905 (30)
Dec  3 01:21:01 compute-0 ansible-async_wrapper.py[218905]: Start module (218905)
Dec  3 01:21:01 compute-0 ansible-async_wrapper.py[218878]: Return async_wrapper task started.
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.190381435 +0000 UTC m=+0.079615732 container create f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:21:01 compute-0 systemd[1]: Started libpod-conmon-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope.
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.157882317 +0000 UTC m=+0.047116694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:01 compute-0 python3[218906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.303835161 +0000 UTC m=+0.193069478 container init f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.313829077 +0000 UTC m=+0.203063374 container start f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.318516487 +0000 UTC m=+0.207750804 container attach f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:01 compute-0 interesting_almeida[218928]: 167 167
Dec  3 01:21:01 compute-0 systemd[1]: libpod-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope: Deactivated successfully.
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.320722628 +0000 UTC m=+0.209956935 container died f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c17166b0a46361cdc8c66720f2cf4bbd54997d52bdf1ea0aabfed1d8825cc60-merged.mount: Deactivated successfully.
Dec  3 01:21:01 compute-0 podman[218910]: 2025-12-03 01:21:01.400123092 +0000 UTC m=+0.289357429 container remove f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.41342117 +0000 UTC m=+0.135654341 container create bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.342400297 +0000 UTC m=+0.064633518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:21:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:21:01 compute-0 systemd[1]: libpod-conmon-f31c11df518317e99f2d93d61acb7e0ea896131c4990b95c44c14d024482de7f.scope: Deactivated successfully.
Dec  3 01:21:01 compute-0 systemd[1]: Started libpod-conmon-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope.
Dec  3 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.551302011 +0000 UTC m=+0.273535262 container init bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.562411798 +0000 UTC m=+0.284645009 container start bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:21:01 compute-0 podman[218930]: 2025-12-03 01:21:01.569084623 +0000 UTC m=+0.291317904 container attach bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.663639546 +0000 UTC m=+0.079576920 container create 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:21:01 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  3 01:21:01 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  3 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.634965854 +0000 UTC m=+0.050903228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  3 01:21:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  3 01:21:01 compute-0 systemd[1]: Started libpod-conmon-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope.
Dec  3 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.857062072 +0000 UTC m=+0.272999446 container init 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.878451744 +0000 UTC m=+0.294389108 container start 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:21:01 compute-0 podman[218970]: 2025-12-03 01:21:01.885247311 +0000 UTC m=+0.301184675 container attach 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:21:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:21:02 compute-0 happy_albattani[218959]: 
Dec  3 01:21:02 compute-0 happy_albattani[218959]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 01:21:02 compute-0 systemd[1]: libpod-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope: Deactivated successfully.
Dec  3 01:21:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:02 compute-0 podman[219038]: 2025-12-03 01:21:02.288998991 +0000 UTC m=+0.059629529 container died bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e61affc03522beda1f9cbd4ffb4563c9abb5ed0e0f536920effddb13a5c0e18-merged.mount: Deactivated successfully.
Dec  3 01:21:02 compute-0 podman[219038]: 2025-12-03 01:21:02.361569037 +0000 UTC m=+0.132199525 container remove bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade (image=quay.io/ceph/ceph:v18, name=happy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:21:02 compute-0 systemd[1]: libpod-conmon-bf764f366a945d50477a1f631f593d52efea140b003fe03ed9bdf965034b3ade.scope: Deactivated successfully.
Dec  3 01:21:02 compute-0 podman[219037]: 2025-12-03 01:21:02.393345966 +0000 UTC m=+0.134167460 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:21:02 compute-0 ansible-async_wrapper.py[218905]: Module complete (218905)
Dec  3 01:21:02 compute-0 podman[219041]: 2025-12-03 01:21:02.426288046 +0000 UTC m=+0.178575437 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec  3 01:21:02 compute-0 podman[219047]: 2025-12-03 01:21:02.427899461 +0000 UTC m=+0.170404351 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec  3 01:21:02 compute-0 podman[219043]: 2025-12-03 01:21:02.440668224 +0000 UTC m=+0.178839335 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 01:21:02 compute-0 python3[219117]: ansible-ansible.legacy.async_status Invoked with jid=j777914220905.218878 mode=status _async_dir=/root/.ansible_async
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]: {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    "0": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "devices": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "/dev/loop3"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            ],
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_name": "ceph_lv0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_size": "21470642176",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "name": "ceph_lv0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "tags": {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.crush_device_class": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.encrypted": "0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_id": "0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.vdo": "0"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            },
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "vg_name": "ceph_vg0"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        }
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    ],
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    "1": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "devices": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "/dev/loop4"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            ],
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_name": "ceph_lv1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_size": "21470642176",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "name": "ceph_lv1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "tags": {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.crush_device_class": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.encrypted": "0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_id": "1",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.vdo": "0"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            },
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "vg_name": "ceph_vg1"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        }
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    ],
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    "2": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "devices": [
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "/dev/loop5"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            ],
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_name": "ceph_lv2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_size": "21470642176",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "name": "ceph_lv2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "tags": {
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.crush_device_class": "",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.encrypted": "0",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osd_id": "2",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:                "ceph.vdo": "0"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            },
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "type": "block",
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:            "vg_name": "ceph_vg2"
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:        }
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]:    ]
Dec  3 01:21:02 compute-0 heuristic_wozniak[218985]: }
Dec  3 01:21:02 compute-0 systemd[1]: libpod-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope: Deactivated successfully.
Dec  3 01:21:02 compute-0 podman[218970]: 2025-12-03 01:21:02.710648216 +0000 UTC m=+1.126585560 container died 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Dec  3 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58bb7a6f14446725388e4a4b3405ce2a506838886ddbbd51c4e1d9fa95ce287-merged.mount: Deactivated successfully.
Dec  3 01:21:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Dec  3 01:21:02 compute-0 podman[218970]: 2025-12-03 01:21:02.777562435 +0000 UTC m=+1.193499769 container remove 331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:02 compute-0 systemd[1]: libpod-conmon-331c0e4b365aaf300a12c69dbf02941437e1ebed51bf563dc59116fed544f30a.scope: Deactivated successfully.
Dec  3 01:21:02 compute-0 python3[219214]: ansible-ansible.legacy.async_status Invoked with jid=j777914220905.218878 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 01:21:03 compute-0 python3[219352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  3 01:21:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  3 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.779885459 +0000 UTC m=+0.110785143 container create c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.746234849 +0000 UTC m=+0.077134593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:03 compute-0 systemd[1]: Started libpod-conmon-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope.
Dec  3 01:21:03 compute-0 podman[219393]: 2025-12-03 01:21:03.901294835 +0000 UTC m=+0.088588050 container create 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:21:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.958708782 +0000 UTC m=+0.289608476 container init c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:03 compute-0 podman[219393]: 2025-12-03 01:21:03.869238319 +0000 UTC m=+0.056531514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.974320593 +0000 UTC m=+0.305220247 container start c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:21:03 compute-0 podman[219370]: 2025-12-03 01:21:03.978938011 +0000 UTC m=+0.309837675 container attach c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:21:03 compute-0 systemd[1]: Started libpod-conmon-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope.
Dec  3 01:21:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  3 01:21:04 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  3 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.064300141 +0000 UTC m=+0.251593396 container init 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.080787076 +0000 UTC m=+0.268080281 container start 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:04 compute-0 wonderful_lamarr[219414]: 167 167
Dec  3 01:21:04 compute-0 systemd[1]: libpod-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope: Deactivated successfully.
Dec  3 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.090068513 +0000 UTC m=+0.277361738 container attach 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.091100301 +0000 UTC m=+0.278393516 container died 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-de9fa81dfcefa1ea5af6cac1913a05e1fed2177f1a0ad3a25251b3fa06f1f154-merged.mount: Deactivated successfully.
Dec  3 01:21:04 compute-0 podman[219393]: 2025-12-03 01:21:04.163633706 +0000 UTC m=+0.350926911 container remove 3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:04 compute-0 systemd[1]: libpod-conmon-3063bb03dfb01272cfa072c7809973283be3be9e99bd7973063c3d5f98c70fe2.scope: Deactivated successfully.
Dec  3 01:21:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.446182396 +0000 UTC m=+0.083830188 container create 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.41629577 +0000 UTC m=+0.053943572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:04 compute-0 systemd[1]: Started libpod-conmon-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope.
Dec  3 01:21:04 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:04 compute-0 gallant_albattani[219408]: 
Dec  3 01:21:04 compute-0 gallant_albattani[219408]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:04 compute-0 systemd[1]: libpod-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope: Deactivated successfully.
Dec  3 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.604823731 +0000 UTC m=+0.242471493 container init 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:04 compute-0 podman[219370]: 2025-12-03 01:21:04.615614749 +0000 UTC m=+0.946514403 container died c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.618995553 +0000 UTC m=+0.256643335 container start 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:21:04 compute-0 podman[219456]: 2025-12-03 01:21:04.635286803 +0000 UTC m=+0.272934575 container attach 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-73d0499ecf6536d78ebd4778909315f3092ea7d1506b97a120e65532b254ff34-merged.mount: Deactivated successfully.
Dec  3 01:21:04 compute-0 podman[219370]: 2025-12-03 01:21:04.696048482 +0000 UTC m=+1.026948166 container remove c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d (image=quay.io/ceph/ceph:v18, name=gallant_albattani, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:21:04 compute-0 systemd[1]: libpod-conmon-c438ca28a8e4ec31835898bd7b4d266406d7adf6d79821a703c1137398e96d1d.scope: Deactivated successfully.
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:05 compute-0 python3[219531]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  3 01:21:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  3 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.767020504 +0000 UTC m=+0.089707450 container create 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:05 compute-0 quirky_pascal[219472]: {
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_id": 2,
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "type": "bluestore"
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    },
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_id": 1,
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "type": "bluestore"
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    },
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_id": 0,
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:        "type": "bluestore"
Dec  3 01:21:05 compute-0 quirky_pascal[219472]:    }
Dec  3 01:21:05 compute-0 quirky_pascal[219472]: }
Dec  3 01:21:05 compute-0 podman[219456]: 2025-12-03 01:21:05.820427681 +0000 UTC m=+1.458075443 container died 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:21:05 compute-0 systemd[1]: Started libpod-conmon-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope.
Dec  3 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.734366512 +0000 UTC m=+0.057053528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:05 compute-0 systemd[1]: libpod-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Deactivated successfully.
Dec  3 01:21:05 compute-0 systemd[1]: libpod-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Consumed 1.187s CPU time.
Dec  3 01:21:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d5d7d8a12fbd2c25110365991bc9112d671ac8cf734a64a4ca6748a13fffa1-merged.mount: Deactivated successfully.
Dec  3 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:05 compute-0 podman[219456]: 2025-12-03 01:21:05.884948354 +0000 UTC m=+1.522596106 container remove 43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:21:05 compute-0 systemd[1]: libpod-conmon-43c946cb74afa1b8096abb4d9203405745768081c65319344a77d05bac979aea.scope: Deactivated successfully.
Dec  3 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.902598002 +0000 UTC m=+0.225284958 container init 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.911495018 +0000 UTC m=+0.234181954 container start 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:05 compute-0 podman[219540]: 2025-12-03 01:21:05.915776496 +0000 UTC m=+0.238463442 container attach 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:05 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1))
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:05 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec  3 01:21:05 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec  3 01:21:06 compute-0 ansible-async_wrapper.py[218902]: Done in kid B.
Dec  3 01:21:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:06 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:21:06 compute-0 quizzical_colden[219561]: 
Dec  3 01:21:06 compute-0 quizzical_colden[219561]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  3 01:21:06 compute-0 systemd[1]: libpod-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope: Deactivated successfully.
Dec  3 01:21:06 compute-0 podman[219540]: 2025-12-03 01:21:06.530691793 +0000 UTC m=+0.853378769 container died 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d490fbb1191314f8999713e9230657963a6b902a8c7fdfce38a6ee77188c7e9-merged.mount: Deactivated successfully.
Dec  3 01:21:06 compute-0 podman[219540]: 2025-12-03 01:21:06.627779216 +0000 UTC m=+0.950466152 container remove 995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587 (image=quay.io/ceph/ceph:v18, name=quizzical_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:06 compute-0 systemd[1]: libpod-conmon-995a6a59de54d9c224a5328453037076d8f167a57f20540a5a2d32ac694be587.scope: Deactivated successfully.
Dec  3 01:21:06 compute-0 podman[219742]: 2025-12-03 01:21:06.872405738 +0000 UTC m=+0.071890828 container create 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:06 compute-0 podman[219742]: 2025-12-03 01:21:06.841706659 +0000 UTC m=+0.041191829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:06 compute-0 systemd[1]: Started libpod-conmon-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope.
Dec  3 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  3 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rxmili", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  3 01:21:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:06 compute-0 ceph-mon[192821]: Deploying daemon rgw.rgw.compute-0.rxmili on compute-0
Dec  3 01:21:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.004435746 +0000 UTC m=+0.203920866 container init 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.018941537 +0000 UTC m=+0.218426627 container start 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.022846735 +0000 UTC m=+0.222331905 container attach 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:07 compute-0 sharp_perlman[219757]: 167 167
Dec  3 01:21:07 compute-0 systemd[1]: libpod-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope: Deactivated successfully.
Dec  3 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.026391003 +0000 UTC m=+0.225876093 container died 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  3 01:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6e64bdd9564afa8444d2ebf6f93bf325483c21ffd24d55b2c1c6809555345c0-merged.mount: Deactivated successfully.
Dec  3 01:21:07 compute-0 podman[219742]: 2025-12-03 01:21:07.090889695 +0000 UTC m=+0.290374825 container remove 34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:21:07 compute-0 systemd[1]: libpod-conmon-34ba4deff48e09031739b71505d15b4156fa54f76aac5cafe16a5d5237dc9288.scope: Deactivated successfully.
Dec  3 01:21:07 compute-0 systemd[1]: Reloading.
Dec  3 01:21:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:21:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:21:07 compute-0 systemd[1]: Reloading.
Dec  3 01:21:07 compute-0 podman[219836]: 2025-12-03 01:21:07.731091271 +0000 UTC m=+0.108068768 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 01:21:07 compute-0 python3[219839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:21:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:21:07 compute-0 podman[219867]: 2025-12-03 01:21:07.869226349 +0000 UTC m=+0.059429194 container create 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:21:07 compute-0 podman[219867]: 2025-12-03 01:21:07.845944595 +0000 UTC m=+0.036147460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:08 compute-0 systemd[1]: Started libpod-conmon-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope.
Dec  3 01:21:08 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.rxmili for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:21:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.2339363 +0000 UTC m=+0.424139225 container init 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.251968598 +0000 UTC m=+0.442171473 container start 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.259081405 +0000 UTC m=+0.449284280 container attach 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.623321322 +0000 UTC m=+0.069137501 container create 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:21:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  3 01:21:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  3 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.598800575 +0000 UTC m=+0.044616834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ec27f5b25a1fff7dbf0661716bfab7c465e3e6b20fed26c285fffb7a2eb5e0/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.rxmili supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.721503776 +0000 UTC m=+0.167320005 container init 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:21:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  3 01:21:08 compute-0 podman[219959]: 2025-12-03 01:21:08.742589159 +0000 UTC m=+0.188405378 container start 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:21:08 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  3 01:21:08 compute-0 bash[219959]: 0a80651d9ff686c169908448ddf808b1c557b63a6a28c6818273c2bf61849243
Dec  3 01:21:08 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.rxmili for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1))
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event f22a4605-e838-4b3e-916f-c2f3f4566817 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  3 01:21:08 compute-0 radosgw[219997]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:21:08 compute-0 radosgw[219997]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec  3 01:21:08 compute-0 radosgw[219997]: framework: beast
Dec  3 01:21:08 compute-0 radosgw[219997]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  3 01:21:08 compute-0 radosgw[219997]: init_numa not setting numa affinity
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 01:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec  3 01:21:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 01:21:08 compute-0 unruffled_babbage[219912]: 
Dec  3 01:21:08 compute-0 unruffled_babbage[219912]: [{"container_id": "d1d072b9d136", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.41%", "created": "2025-12-03T01:19:06.601363Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-03T01:19:06.663617Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566575Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-12-03T01:19:06.334103Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@crash.compute-0", "version": "18.2.7"}, {"container_id": "b81e9a342791", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.65%", "created": "2025-12-03T01:17:43.571058Z", "daemon_id": "compute-0.rysove", "daemon_name": "mgr.compute-0.rysove", "daemon_type": "mgr", "events": ["2025-12-03T01:20:18.292907Z daemon:mgr.compute-0.rysove [INFO] \"Reconfigured mgr.compute-0.rysove on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566440Z", "memory_usage": 549139251, "ports": [9283, 8765], "service_name": "mgr", "started": 
"2025-12-03T01:17:43.394166Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mgr.compute-0.rysove", "version": "18.2.7"}, {"container_id": "d4928ec355dd", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.23%", "created": "2025-12-03T01:17:35.963872Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-03T01:20:17.187496Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566301Z", "memory_request": 2147483648, "memory_usage": 41030778, "ports": [], "service_name": "mon", "started": "2025-12-03T01:17:39.952128Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@mon.compute-0", "version": "18.2.7"}, {"container_id": "42c5471d35c5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.89%", "created": "2025-12-03T01:19:40.591235Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-03T01:19:40.670029Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566666Z", "memory_request": 
4294967296, "memory_usage": 67496837, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:40.340364Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.0", "version": "18.2.7"}, {"container_id": "a464c63d7c32", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.11%", "created": "2025-12-03T01:19:47.558414Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-03T01:19:47.656817Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566751Z", "memory_request": 4294967296, "memory_usage": 66815262, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:47.298414Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.1", "version": "18.2.7"}, {"container_id": "8463edd2b7db", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.10%", "created": "2025-12-03T01:19:54.374589Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-03T01:19:54.434839Z daemon:osd.2 [INFO] \"Deployed osd.2 on 
host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T01:20:56.566833Z", "memory_request": 4294967296, "memory_usage": 66112716, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T01:19:54.176860Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.rxmili", "daemon_name": "rgw.rgw.compute-0.rxmili", "daemon_type": "rgw", "events": ["2025-12-03T01:21:08.855308Z daemon:rgw.rgw.compute-0.rxmili [INFO] \"Deployed rgw.rgw.compute-0.rxmili on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Dec  3 01:21:08 compute-0 systemd[1]: libpod-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope: Deactivated successfully.
Dec  3 01:21:08 compute-0 podman[219867]: 2025-12-03 01:21:08.996933409 +0000 UTC m=+1.187136284 container died 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9484d7af0da6cc3db48178dfbeb2744ba8b450a7a63c3cafcaf8df0b7320b03a-merged.mount: Deactivated successfully.
Dec  3 01:21:09 compute-0 podman[219867]: 2025-12-03 01:21:09.082820133 +0000 UTC m=+1.273022978 container remove 0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3 (image=quay.io/ceph/ceph:v18, name=unruffled_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:21:09 compute-0 systemd[1]: libpod-conmon-0434529f9c3963875cc0f68c3b0334bead9bcdfba5ebae95c3b9944eabccb7d3.scope: Deactivated successfully.
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:09 compute-0 ceph-mon[192821]: Saving service rgw.rgw spec with placement compute-0
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  3 01:21:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgmlsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 01:21:09 compute-0 ceph-mon[192821]: Deploying daemon mds.cephfs.compute-0.bgmlsq on compute-0
Dec  3 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  3 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  3 01:21:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  3 01:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec  3 01:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  3 01:21:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  3 01:21:09 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.009402514 +0000 UTC m=+0.074363136 container create cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:10 compute-0 systemd[1]: Started libpod-conmon-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope.
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:09.988936498 +0000 UTC m=+0.053897140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.126209053 +0000 UTC m=+0.191169705 container init cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.136335393 +0000 UTC m=+0.201296025 container start cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.141695971 +0000 UTC m=+0.206656603 container attach cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:21:10 compute-0 cranky_satoshi[220253]: 167 167
Dec  3 01:21:10 compute-0 systemd[1]: libpod-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope: Deactivated successfully.
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.144719734 +0000 UTC m=+0.209680366 container died cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-555624ac1bd436b118a4efec88a7508b72f6ff2044428392fbe07d5b4a46a58d-merged.mount: Deactivated successfully.
Dec  3 01:21:10 compute-0 podman[220211]: 2025-12-03 01:21:10.198732217 +0000 UTC m=+0.263692819 container remove cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_satoshi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:21:10 compute-0 python3[220252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:10 compute-0 systemd[1]: libpod-conmon-cd58bb294ea3ed7406e470cd3e45e02d36edd902943dacb083757ab6a1cf1e5c.scope: Deactivated successfully.
Dec  3 01:21:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v115: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:10 compute-0 systemd[1]: Reloading.
Dec  3 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.284045095 +0000 UTC m=+0.056721349 container create 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.265115412 +0000 UTC m=+0.037791696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:21:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:21:10 compute-0 systemd[1]: Started libpod-conmon-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope.
Dec  3 01:21:10 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:10 compute-0 systemd[1]: Reloading.
Dec  3 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.823788343 +0000 UTC m=+0.596464607 container init 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.839635551 +0000 UTC m=+0.612311795 container start 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:21:10 compute-0 podman[220270]: 2025-12-03 01:21:10.843784036 +0000 UTC m=+0.616460280 container attach 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:21:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  3 01:21:10 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  3 01:21:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  3 01:21:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  3 01:21:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  3 01:21:10 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:21:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:21:11 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bgmlsq for 3765feb2-36f8-5b86-b74c-64e9221f9c4c...
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  3 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935809746' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  3 01:21:11 compute-0 serene_galois[220326]: 
Dec  3 01:21:11 compute-0 serene_galois[220326]: {"fsid":"3765feb2-36f8-5b86-b74c-64e9221f9c4c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":211,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1764724803,"num_in_osds":3,"osd_in_since":1764724766,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84156416,"bytes_avail":64327770112,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-03T01:21:04.220965+0000","services":{}},"progress_events":{"2fc1d00e-bb55-41de-90e2-200846901dc1":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  3 01:21:11 compute-0 systemd[1]: libpod-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope: Deactivated successfully.
Dec  3 01:21:11 compute-0 podman[220270]: 2025-12-03 01:21:11.581825995 +0000 UTC m=+1.354502279 container died 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-da662e3b693213a5bfd44acc6e7f4b595097da56dfd958b6a0f6205832bd8a61-merged.mount: Deactivated successfully.
Dec  3 01:21:11 compute-0 podman[220270]: 2025-12-03 01:21:11.670289611 +0000 UTC m=+1.442965855 container remove 62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f (image=quay.io/ceph/ceph:v18, name=serene_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:21:11 compute-0 systemd[1]: libpod-conmon-62aeb9beeaffdf698fefe4bb89716f3ab504c41ec47dd8f8b88e830e9a30c85f.scope: Deactivated successfully.
Dec  3 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.745281503 +0000 UTC m=+0.063770103 container create 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:11 compute-0 podman[220435]: 2025-12-03 01:21:11.771741655 +0000 UTC m=+0.136914136 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec  3 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.719886231 +0000 UTC m=+0.038374851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf55e30530ffacd6638fe2d802aa6bf57318793b1d18c99661a35eb7f8593ff/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bgmlsq supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.872982893 +0000 UTC m=+0.191471503 container init 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:21:11 compute-0 podman[220455]: 2025-12-03 01:21:11.891287759 +0000 UTC m=+0.209776399 container start 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:11 compute-0 bash[220455]: 1c1d1d808cb06f16e54581921c03dc9d7a5982264b59cca441ed0cf076fdeef5
Dec  3 01:21:11 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bgmlsq for 3765feb2-36f8-5b86-b74c-64e9221f9c4c.
Dec  3 01:21:11 compute-0 ceph-mon[192821]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  3 01:21:11 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  3 01:21:11 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  3 01:21:11 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec  3 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:11 compute-0 ceph-mds[220488]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:21:11 compute-0 ceph-mds[220488]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  3 01:21:11 compute-0 ceph-mds[220488]: main not setting numa affinity
Dec  3 01:21:11 compute-0 ceph-mds[220488]: pidfile_write: ignore empty --pid-file
Dec  3 01:21:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq[220484]: starting mds.cephfs.compute-0.bgmlsq at 
Dec  3 01:21:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 01:21:12 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2fc1d00e-bb55-41de-90e2-200846901dc1 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec  3 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec  3 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 2 from mon.0
Dec  3 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  3 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v118: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:12 compute-0 python3[220644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:12 compute-0 podman[220682]: 2025-12-03 01:21:12.936996203 +0000 UTC m=+0.082474001 container create abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  3 01:21:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  3 01:21:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  3 01:21:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:12.913214406 +0000 UTC m=+0.058692244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:13 compute-0 systemd[1]: Started libpod-conmon-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope.
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 new map
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T01:20:48.773680+0000#012modified#0112025-12-03T01:20:48.773725+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.bgmlsq{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 3 from mon.0
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Monitors have assigned me to become a standby.
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:boot
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] as mds.0
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgmlsq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bgmlsq"} v 0) v1
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgmlsq"}]: dispatch
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e3 all = 0
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e4 new map
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T01:20:48.773680+0000#012modified#0112025-12-03T01:21:13.027391+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bgmlsq{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 4 from mon.0
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:creating}
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x1
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x100
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x600
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x601
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x602
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x603
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x604
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x605
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x606
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x607
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x608
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.cache creating system inode with ino:0x609
Dec  3 01:21:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.0953467 +0000 UTC m=+0.240824518 container init abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.107253539 +0000 UTC m=+0.252731347 container start abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.125269687 +0000 UTC m=+0.270747525 container attach abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:21:13 compute-0 ceph-mds[220488]: mds.0.4 creating_done
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgmlsq is now active in filesystem cephfs as rank 0
Dec  3 01:21:13 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 12 completed events
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:13 compute-0 podman[220795]: 2025-12-03 01:21:13.622208039 +0000 UTC m=+0.103621422 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  3 01:21:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707348916' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  3 01:21:13 compute-0 gracious_bell[220711]: 
Dec  3 01:21:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  3 01:21:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  3 01:21:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  3 01:21:13 compute-0 gracious_bell[220711]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.rxmili","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  3 01:21:13 compute-0 systemd[1]: libpod-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope: Deactivated successfully.
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.723519606 +0000 UTC m=+0.868997444 container died abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:21:13 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  3 01:21:13 compute-0 podman[220795]: 2025-12-03 01:21:13.744634506 +0000 UTC m=+0.226047919 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae06884a7e1ccf36ce3e4804e78741e5c58771aebe20e0809e909bd142781265-merged.mount: Deactivated successfully.
Dec  3 01:21:13 compute-0 podman[220682]: 2025-12-03 01:21:13.83100878 +0000 UTC m=+0.976486618 container remove abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561 (image=quay.io/ceph/ceph:v18, name=gracious_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:13 compute-0 systemd[1]: libpod-conmon-abd99f0a93e6a365a5542385a098f27734a3445a06329eab5e162de08f7af561.scope: Deactivated successfully.
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  3 01:21:13 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  3 01:21:13 compute-0 ceph-mon[192821]: daemon mds.cephfs.compute-0.bgmlsq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 01:21:13 compute-0 ceph-mon[192821]: daemon mds.cephfs.compute-0.bgmlsq is now active in filesystem cephfs as rank 0
Dec  3 01:21:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  3 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  3 01:21:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  3 01:21:14 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  3 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e5 new map
Dec  3 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T01:20:48.773680+0000#012modified#0112025-12-03T01:21:14.042948+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.bgmlsq{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  3 01:21:14 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq Updating MDS map to version 5 from mon.0
Dec  3 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map i am now mds.0.4
Dec  3 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  3 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 recovery_done -- successful recovery!
Dec  3 01:21:14 compute-0 ceph-mds[220488]: mds.0.4 active_start
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2595993109,v1:192.168.122.100:6815/2595993109] up:active
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgmlsq=up:active}
Dec  3 01:21:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v121: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 4 op/s
Dec  3 01:21:14 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:14 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  3 01:21:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  3 01:21:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  3 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  3 01:21:15 compute-0 python3[220985]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:15 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  3 01:21:15 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  3 01:21:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:15 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.129872588 +0000 UTC m=+0.072606612 container create c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.098227643 +0000 UTC m=+0.040961697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:15 compute-0 systemd[1]: Started libpod-conmon-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope.
Dec  3 01:21:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.261900435 +0000 UTC m=+0.204634489 container init c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.27097955 +0000 UTC m=+0.213713564 container start c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.276180081 +0000 UTC m=+0.218914185 container attach c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec  3 01:21:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688098421' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  3 01:21:15 compute-0 practical_jepsen[221055]: mimic
Dec  3 01:21:15 compute-0 systemd[1]: libpod-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope: Deactivated successfully.
Dec  3 01:21:15 compute-0 conmon[221055]: conmon c58be8a62780046628e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope/container/memory.events
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.859699255 +0000 UTC m=+0.802433359 container died c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c171330c5e56faeb7a838cffd806ade4739b69e13a6f277983f07c4853e41a-merged.mount: Deactivated successfully.
Dec  3 01:21:15 compute-0 podman[221009]: 2025-12-03 01:21:15.959029038 +0000 UTC m=+0.901763062 container remove c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7 (image=quay.io/ceph/ceph:v18, name=practical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:21:15 compute-0 systemd[1]: libpod-conmon-c58be8a62780046628e0dbc754a201b104346cfb116a7b579ce5515db17947d7.scope: Deactivated successfully.
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/2760567315' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b51972ea-25de-4b21-a61c-86e331f6aea6 does not exist
Dec  3 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7d772d0f-3c1e-44b5-a0ff-77fd027ff35e does not exist
Dec  3 01:21:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fb9bce39-566e-46d5-a2f8-54ea31910a6b does not exist
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 14 op/s
Dec  3 01:21:16 compute-0 podman[221254]: 2025-12-03 01:21:16.697809128 +0000 UTC m=+0.169663396 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:21:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  3 01:21:16 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  3 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:21:17 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  3 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  3 01:21:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  3 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  3 01:21:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  3 01:21:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec  3 01:21:17 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  3 01:21:17 compute-0 python3[221340]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:17 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.189086069 +0000 UTC m=+0.080571998 container create a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.212756398 +0000 UTC m=+0.073242309 container create 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.14802478 +0000 UTC m=+0.039510819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:17 compute-0 systemd[1]: Started libpod-conmon-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope.
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.182145151 +0000 UTC m=+0.042631152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:17 compute-0 systemd[1]: Started libpod-conmon-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope.
Dec  3 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.376935874 +0000 UTC m=+0.268421843 container init a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.382286238 +0000 UTC m=+0.242772259 container init 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.393598174 +0000 UTC m=+0.285084133 container start a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.39346449 +0000 UTC m=+0.253950411 container start 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:17 compute-0 podman[221364]: 2025-12-03 01:21:17.399831532 +0000 UTC m=+0.291317531 container attach a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:17 compute-0 flamboyant_dijkstra[221397]: 167 167
Dec  3 01:21:17 compute-0 systemd[1]: libpod-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope: Deactivated successfully.
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.412055693 +0000 UTC m=+0.272541634 container attach 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.412465954 +0000 UTC m=+0.272951895 container died 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f160de2dfcaa116ce3823a884fcd36cb519f67bd6f31ef757f50c676b717a975-merged.mount: Deactivated successfully.
Dec  3 01:21:17 compute-0 podman[221371]: 2025-12-03 01:21:17.496120914 +0000 UTC m=+0.356606835 container remove 6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dijkstra, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:21:17 compute-0 systemd[1]: libpod-conmon-6d35c30d7d4fdc6c55df2e461c254877d4b214add8ff24e292b96096327a227d.scope: Deactivated successfully.
Dec  3 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.798381818 +0000 UTC m=+0.081682697 container create 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.766755604 +0000 UTC m=+0.050056523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:17 compute-0 systemd[1]: Started libpod-conmon-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope.
Dec  3 01:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.97611737 +0000 UTC m=+0.259418229 container init 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.993584862 +0000 UTC m=+0.276885711 container start 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:17 compute-0 podman[221424]: 2025-12-03 01:21:17.999174843 +0000 UTC m=+0.282475712 container attach 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:21:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  3 01:21:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  3 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  3 01:21:18 compute-0 awesome_black[221395]: 
Dec  3 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec  3 01:21:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146977044' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  3 01:21:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  3 01:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  3 01:21:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mds-cephfs-compute-0-bgmlsq[220484]: 2025-12-03T01:21:18.072+0000 7f8c6dd4f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  3 01:21:18 compute-0 ceph-mds[220488]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  3 01:21:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  3 01:21:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  3 01:21:18 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  3 01:21:18 compute-0 awesome_black[221395]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Dec  3 01:21:18 compute-0 systemd[1]: libpod-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope: Deactivated successfully.
Dec  3 01:21:18 compute-0 podman[221465]: 2025-12-03 01:21:18.195475806 +0000 UTC m=+0.058244374 container died a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec  3 01:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e746f55c848ae8cc54b53a170c447a0c4bd396ec41f0d7919fd98a37b89404b6-merged.mount: Deactivated successfully.
Dec  3 01:21:18 compute-0 podman[221465]: 2025-12-03 01:21:18.28888789 +0000 UTC m=+0.151656378 container remove a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b (image=quay.io/ceph/ceph:v18, name=awesome_black, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:21:18 compute-0 systemd[1]: libpod-conmon-a6fd231ab4d949bb8a681846a25b956314a9c13121dd339135aa731dcd9c5e2b.scope: Deactivated successfully.
Dec  3 01:21:18 compute-0 radosgw[219997]: LDAP not started since no server URIs were provided in the configuration.
Dec  3 01:21:18 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-rgw-rgw-compute-0-rxmili[219992]: 2025-12-03T01:21:18.362+0000 7f7f0a2ec940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  3 01:21:18 compute-0 radosgw[219997]: framework: beast
Dec  3 01:21:18 compute-0 radosgw[219997]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  3 01:21:18 compute-0 radosgw[219997]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  3 01:21:18 compute-0 radosgw[219997]: starting handler: beast
Dec  3 01:21:18 compute-0 radosgw[219997]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 01:21:18 compute-0 radosgw[219997]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.rxmili,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=12c704e0-e093-4406-ae80-90d748b4eab2,zone_name=default,zonegroup_id=ed145ee8-3dce-430d-a202-c3985ac1478c,zonegroup_name=default}
Dec  3 01:21:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts
Dec  3 01:21:18 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok
Dec  3 01:21:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 01:21:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 01:21:19 compute-0 ceph-mon[192821]: from='client.? 192.168.122.100:0/3891097115' entity='client.rgw.rgw.compute-0.rxmili' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  3 01:21:19 compute-0 suspicious_burnell[221458]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:21:19 compute-0 suspicious_burnell[221458]: --> relative data size: 1.0
Dec  3 01:21:19 compute-0 suspicious_burnell[221458]: --> All data devices are unavailable
Dec  3 01:21:19 compute-0 systemd[1]: libpod-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Deactivated successfully.
Dec  3 01:21:19 compute-0 systemd[1]: libpod-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Consumed 1.235s CPU time.
Dec  3 01:21:19 compute-0 podman[221424]: 2025-12-03 01:21:19.34827342 +0000 UTC m=+1.631574299 container died 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e05eb8541bde2015aebc407c7bead008f74985b1b2569cf3df0c248b6b6621e-merged.mount: Deactivated successfully.
Dec  3 01:21:19 compute-0 podman[221424]: 2025-12-03 01:21:19.452731512 +0000 UTC m=+1.736032341 container remove 70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:21:19 compute-0 systemd[1]: libpod-conmon-70df358747fa65898bc96807e53c37dbb136cb87d5b2ad6e499910d443536aea.scope: Deactivated successfully.
Dec  3 01:21:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  3 01:21:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  3 01:21:20 compute-0 ceph-mon[192821]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  3 01:21:20 compute-0 ceph-mon[192821]: Cluster is now healthy
Dec  3 01:21:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 982 B/s rd, 1.3 KiB/s wr, 4 op/s
Dec  3 01:21:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.538037182 +0000 UTC m=+0.060673090 container create f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.519614444 +0000 UTC m=+0.042250372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:20 compute-0 systemd[1]: Started libpod-conmon-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope.
Dec  3 01:21:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.746344409 +0000 UTC m=+0.268980357 container init f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.768751155 +0000 UTC m=+0.291387103 container start f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.775289861 +0000 UTC m=+0.297925809 container attach f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:21:20 compute-0 elated_aryabhata[222210]: 167 167
Dec  3 01:21:20 compute-0 systemd[1]: libpod-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope: Deactivated successfully.
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.782311401 +0000 UTC m=+0.304947339 container died f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca7ee2198628af7d2b2e9f227896da255ab8527f88de36c8804d198851f1fd0-merged.mount: Deactivated successfully.
Dec  3 01:21:20 compute-0 podman[222194]: 2025-12-03 01:21:20.864707787 +0000 UTC m=+0.387343705 container remove f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:20 compute-0 systemd[1]: libpod-conmon-f78df721ec5f7b84b3daacc24e8d41851243428d214d8b187a323a22ad4d12ae.scope: Deactivated successfully.
Dec  3 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.115960675 +0000 UTC m=+0.076700683 container create fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.082493711 +0000 UTC m=+0.043233779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:21 compute-0 systemd[1]: Started libpod-conmon-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope.
Dec  3 01:21:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.253992494 +0000 UTC m=+0.214732512 container init fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.285922105 +0000 UTC m=+0.246662123 container start fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:21:21 compute-0 podman[222236]: 2025-12-03 01:21:21.292916294 +0000 UTC m=+0.253656312 container attach fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  3 01:21:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  3 01:21:22 compute-0 kind_poincare[222251]: {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    "0": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "devices": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "/dev/loop3"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            ],
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_name": "ceph_lv0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_size": "21470642176",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "name": "ceph_lv0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "tags": {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.crush_device_class": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.encrypted": "0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_id": "0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.vdo": "0"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            },
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "vg_name": "ceph_vg0"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        }
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    ],
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    "1": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "devices": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "/dev/loop4"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            ],
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_name": "ceph_lv1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_size": "21470642176",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "name": "ceph_lv1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "tags": {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.crush_device_class": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.encrypted": "0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_id": "1",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.vdo": "0"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            },
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "vg_name": "ceph_vg1"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        }
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    ],
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    "2": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "devices": [
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "/dev/loop5"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            ],
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_name": "ceph_lv2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_size": "21470642176",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "name": "ceph_lv2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "tags": {
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.crush_device_class": "",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.encrypted": "0",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osd_id": "2",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:                "ceph.vdo": "0"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            },
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "type": "block",
Dec  3 01:21:22 compute-0 kind_poincare[222251]:            "vg_name": "ceph_vg2"
Dec  3 01:21:22 compute-0 kind_poincare[222251]:        }
Dec  3 01:21:22 compute-0 kind_poincare[222251]:    ]
Dec  3 01:21:22 compute-0 kind_poincare[222251]: }
Dec  3 01:21:22 compute-0 systemd[1]: libpod-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope: Deactivated successfully.
Dec  3 01:21:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 KiB/s wr, 246 op/s
Dec  3 01:21:22 compute-0 podman[222260]: 2025-12-03 01:21:22.233835254 +0000 UTC m=+0.065031308 container died fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-221385a6e952c4969c7155489543a3624903b4f6a9e5a393f498c2d44194943f-merged.mount: Deactivated successfully.
Dec  3 01:21:22 compute-0 podman[222260]: 2025-12-03 01:21:22.337447263 +0000 UTC m=+0.168643227 container remove fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:21:22 compute-0 systemd[1]: libpod-conmon-fd185a5f107f3680dcb677dcff4b39fd1714196e0e13feec29369e3f32e444c7.scope: Deactivated successfully.
Dec  3 01:21:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  3 01:21:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  3 01:21:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  3 01:21:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.529438875 +0000 UTC m=+0.079684804 container create 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:21:23 compute-0 systemd[1]: Started libpod-conmon-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope.
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.501074599 +0000 UTC m=+0.051320618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.657359801 +0000 UTC m=+0.207605740 container init 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.671919414 +0000 UTC m=+0.222165373 container start 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.679871439 +0000 UTC m=+0.230117368 container attach 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:21:23 compute-0 pedantic_northcutt[222424]: 167 167
Dec  3 01:21:23 compute-0 systemd[1]: libpod-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope: Deactivated successfully.
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.684818663 +0000 UTC m=+0.235064632 container died 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e9487937eed16ba70e418fcf74f2aac6854b88456784a7b18af3e1d36d75a6-merged.mount: Deactivated successfully.
Dec  3 01:21:23 compute-0 podman[222409]: 2025-12-03 01:21:23.757222589 +0000 UTC m=+0.307468558 container remove 44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:21:23 compute-0 systemd[1]: libpod-conmon-44d96570f57635dd97295a96de52376e9728fed89dda38b2f9e4e86d654cc7ea.scope: Deactivated successfully.
Dec  3 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.056152575 +0000 UTC m=+0.104372841 container create 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.005976949 +0000 UTC m=+0.054197265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:24 compute-0 systemd[1]: Started libpod-conmon-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope.
Dec  3 01:21:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.0 KiB/s wr, 190 op/s
Dec  3 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.249647232 +0000 UTC m=+0.297867518 container init 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.2699203 +0000 UTC m=+0.318140576 container start 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:24 compute-0 podman[222446]: 2025-12-03 01:21:24.276858217 +0000 UTC m=+0.325078463 container attach 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:21:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  3 01:21:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  3 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:25 compute-0 nice_bohr[222463]: {
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_id": 2,
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "type": "bluestore"
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    },
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_id": 1,
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "type": "bluestore"
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    },
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_id": 0,
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:25 compute-0 nice_bohr[222463]:        "type": "bluestore"
Dec  3 01:21:25 compute-0 nice_bohr[222463]:    }
Dec  3 01:21:25 compute-0 nice_bohr[222463]: }
Dec  3 01:21:25 compute-0 systemd[1]: libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Deactivated successfully.
Dec  3 01:21:25 compute-0 systemd[1]: libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Consumed 1.085s CPU time.
Dec  3 01:21:25 compute-0 conmon[222463]: conmon 477d62980865055bf3bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope/container/memory.events
Dec  3 01:21:25 compute-0 podman[222446]: 2025-12-03 01:21:25.354623043 +0000 UTC m=+1.402843319 container died 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-647055bdf1df94acd87540ef7480273fe762b664a21d234b4a321eba2949dcf3-merged.mount: Deactivated successfully.
Dec  3 01:21:25 compute-0 podman[222446]: 2025-12-03 01:21:25.454117181 +0000 UTC m=+1.502337447 container remove 477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:21:25 compute-0 systemd[1]: libpod-conmon-477d62980865055bf3bbd519efcfa1f3e13d01cb5a35472ebf18a4b2757ad0ab.scope: Deactivated successfully.
Dec  3 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11f94589-e536-405f-9731-6bffd45fe368 does not exist
Dec  3 01:21:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 25d082bf-00a8-4264-aad9-8ac0c5974ecb does not exist
Dec  3 01:21:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.5 KiB/s wr, 170 op/s
Dec  3 01:21:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:26 compute-0 python3[222682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.713353509 +0000 UTC m=+0.092653144 container create a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.673321388 +0000 UTC m=+0.052621083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:26 compute-0 systemd[1]: Started libpod-conmon-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope.
Dec  3 01:21:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.867116813 +0000 UTC m=+0.246416518 container init a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.878799939 +0000 UTC m=+0.258099574 container start a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:21:26 compute-0 podman[222695]: 2025-12-03 01:21:26.884865833 +0000 UTC m=+0.264165528 container attach a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:21:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  3 01:21:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  3 01:21:27 compute-0 relaxed_pascal[222727]: could not fetch user info: no user info saved
Dec  3 01:21:27 compute-0 systemd[1]: libpod-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope: Deactivated successfully.
Dec  3 01:21:27 compute-0 podman[222695]: 2025-12-03 01:21:27.326457833 +0000 UTC m=+0.705757518 container died a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:21:27 compute-0 podman[222837]: 2025-12-03 01:21:27.346209396 +0000 UTC m=+0.126956010 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca851a31dcf49dd594a83ec3ebaf6559f58131d8f657ce91946562b3caea05df-merged.mount: Deactivated successfully.
Dec  3 01:21:27 compute-0 podman[222695]: 2025-12-03 01:21:27.388448797 +0000 UTC m=+0.767748422 container remove a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994 (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:21:27 compute-0 systemd[1]: libpod-conmon-a323385278a70f194cbcca73f95f6e62d9fad3d10a7701376dac51dbeed65994.scope: Deactivated successfully.
Dec  3 01:21:27 compute-0 podman[222837]: 2025-12-03 01:21:27.455110618 +0000 UTC m=+0.235857202 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:27 compute-0 python3[222947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 3765feb2-36f8-5b86-b74c-64e9221f9c4c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:27 compute-0 podman[222966]: 2025-12-03 01:21:27.939883635 +0000 UTC m=+0.093536928 container create 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:27 compute-0 podman[222966]: 2025-12-03 01:21:27.906617866 +0000 UTC m=+0.060271189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  3 01:21:28 compute-0 systemd[1]: Started libpod-conmon-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope.
Dec  3 01:21:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.086000442 +0000 UTC m=+0.239653785 container init 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.12 deep-scrub starts
Dec  3 01:21:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.12 deep-scrub ok
Dec  3 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.107949325 +0000 UTC m=+0.261602618 container start 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.116089775 +0000 UTC m=+0.269743118 container attach 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:21:28
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'backups', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.1 KiB/s wr, 154 op/s
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:21:28 compute-0 brave_beaver[222996]: {
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "user_id": "openstack",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "display_name": "openstack",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "email": "",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "suspended": 0,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "max_buckets": 1000,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "subusers": [],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "keys": [
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        {
Dec  3 01:21:28 compute-0 brave_beaver[222996]:            "user": "openstack",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:            "access_key": "YB1RTHMPFEW18QROY7ER",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:            "secret_key": "TxYHATGxG73QroJOSanT2WXDGyRVB7BDnSwtbkdZ"
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        }
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    ],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "swift_keys": [],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "caps": [],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "op_mask": "read, write, delete",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "default_placement": "",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "default_storage_class": "",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "placement_tags": [],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "bucket_quota": {
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "enabled": false,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "check_on_raw": false,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_size": -1,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_size_kb": 0,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_objects": -1
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    },
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "user_quota": {
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "enabled": false,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "check_on_raw": false,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_size": -1,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_size_kb": 0,
Dec  3 01:21:28 compute-0 brave_beaver[222996]:        "max_objects": -1
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    },
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "temp_url_keys": [],
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "type": "rgw",
Dec  3 01:21:28 compute-0 brave_beaver[222996]:    "mfa_ids": []
Dec  3 01:21:28 compute-0 brave_beaver[222996]: }
Dec  3 01:21:28 compute-0 brave_beaver[222996]: 
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:21:28 compute-0 systemd[1]: libpod-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope: Deactivated successfully.
Dec  3 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.528956408 +0000 UTC m=+0.682609701 container died 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev de87f886-d48c-403a-b0cc-ddb32d2895f2 does not exist
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9e57237c-2656-4531-a88a-59122439576a does not exist
Dec  3 01:21:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b1d86312-6c16-4b49-8810-7b4497ac8729 does not exist
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:21:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:21:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5a0a9957730585a21d5523f9a5ac3e6b82aff134a1abcd0199aa5c1b9a34de0-merged.mount: Deactivated successfully.
Dec  3 01:21:28 compute-0 podman[222966]: 2025-12-03 01:21:28.588858876 +0000 UTC m=+0.742512129 container remove 687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384 (image=quay.io/ceph/ceph:v18, name=brave_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:28 compute-0 systemd[1]: libpod-conmon-687df95c5c595a8cfb6c2b432dc0256fc365162a4933bdd8cb271b18348be384.scope: Deactivated successfully.
Dec  3 01:21:28 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  3 01:21:28 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  3 01:21:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:21:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  3 01:21:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.627361682 +0000 UTC m=+0.088783789 container create c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.591475713 +0000 UTC m=+0.052897820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:29 compute-0 systemd[1]: Started libpod-conmon-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope.
Dec  3 01:21:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:29 compute-0 podman[158098]: time="2025-12-03T01:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.77534151 +0000 UTC m=+0.236763627 container init c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.790695725 +0000 UTC m=+0.252117832 container start c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.797125578 +0000 UTC m=+0.258547685 container attach c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34189 "" "Go-http-client/1.1"
Dec  3 01:21:29 compute-0 great_boyd[223293]: 167 167
Dec  3 01:21:29 compute-0 systemd[1]: libpod-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope: Deactivated successfully.
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.801982489 +0000 UTC m=+0.263404566 container died c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de458d4527777bb17e90025fc91703dd309149b6bb395edff4260c734ee8409-merged.mount: Deactivated successfully.
Dec  3 01:21:29 compute-0 podman[223278]: 2025-12-03 01:21:29.884137249 +0000 UTC m=+0.345559346 container remove c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_boyd, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6800 "" "Go-http-client/1.1"
Dec  3 01:21:29 compute-0 systemd[1]: libpod-conmon-c588c0c8522cf36a8bb36939f1b4ee8c99d4ee193fd99a5a1c6611ca3e70d359.scope: Deactivated successfully.
Dec  3 01:21:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec  3 01:21:30 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec  3 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.187439363 +0000 UTC m=+0.125437480 container create 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.7 KiB/s wr, 130 op/s
Dec  3 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.147446912 +0000 UTC m=+0.085445079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:30 compute-0 systemd[1]: Started libpod-conmon-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope.
Dec  3 01:21:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.380977041 +0000 UTC m=+0.318975188 container init 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.406436089 +0000 UTC m=+0.344434186 container start 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:21:30 compute-0 podman[223316]: 2025-12-03 01:21:30.413823159 +0000 UTC m=+0.351821306 container attach 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Dec  3 01:21:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:21:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:21:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  3 01:21:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  3 01:21:31 compute-0 exciting_carver[223332]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:21:31 compute-0 exciting_carver[223332]: --> relative data size: 1.0
Dec  3 01:21:31 compute-0 exciting_carver[223332]: --> All data devices are unavailable
Dec  3 01:21:31 compute-0 systemd[1]: libpod-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Deactivated successfully.
Dec  3 01:21:31 compute-0 systemd[1]: libpod-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Consumed 1.211s CPU time.
Dec  3 01:21:31 compute-0 podman[223361]: 2025-12-03 01:21:31.741497917 +0000 UTC m=+0.045531931 container died 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a385f52ed5e7a546126cea21a529bbb2e712d4ea97a96d3ce5c4d4d43947dbf1-merged.mount: Deactivated successfully.
Dec  3 01:21:31 compute-0 podman[223361]: 2025-12-03 01:21:31.817439178 +0000 UTC m=+0.121473152 container remove 88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:31 compute-0 systemd[1]: libpod-conmon-88bc8a080db5cd858738feef7a26d9dfd7f1e9bca312702741ac5e9baf29504a.scope: Deactivated successfully.
Dec  3 01:21:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  3 01:21:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  3 01:21:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec  3 01:21:32 compute-0 podman[223503]: 2025-12-03 01:21:32.878951725 +0000 UTC m=+0.111860813 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:21:32 compute-0 podman[223502]: 2025-12-03 01:21:32.879044237 +0000 UTC m=+0.131637867 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:21:32 compute-0 podman[223504]: 2025-12-03 01:21:32.89691766 +0000 UTC m=+0.126947931 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 01:21:32 compute-0 podman[223505]: 2025-12-03 01:21:32.900561869 +0000 UTC m=+0.134298880 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 01:21:32 compute-0 podman[223598]: 2025-12-03 01:21:32.977440325 +0000 UTC m=+0.058863751 container create 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:33 compute-0 systemd[1]: Started libpod-conmon-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope.
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:32.954130886 +0000 UTC m=+0.035554302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.107426917 +0000 UTC m=+0.188850413 container init 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.120785068 +0000 UTC m=+0.202208504 container start 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.126367619 +0000 UTC m=+0.207791045 container attach 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:21:33 compute-0 intelligent_tu[223614]: 167 167
Dec  3 01:21:33 compute-0 systemd[1]: libpod-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope: Deactivated successfully.
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.134050906 +0000 UTC m=+0.215474332 container died 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab20ceccd3c583d288a0ded46de836fc2affe424cc0c5d442e8a6024064ef2c-merged.mount: Deactivated successfully.
Dec  3 01:21:33 compute-0 podman[223598]: 2025-12-03 01:21:33.207901021 +0000 UTC m=+0.289324447 container remove 667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  3 01:21:33 compute-0 systemd[1]: libpod-conmon-667da34691532e82325bee118eb4049177ae2bf1277a1c66c53bbe1e2752abba.scope: Deactivated successfully.
Dec  3 01:21:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  3 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.495204283 +0000 UTC m=+0.091876613 container create 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:33 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  3 01:21:33 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  3 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.460153276 +0000 UTC m=+0.056825616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:33 compute-0 systemd[1]: Started libpod-conmon-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope.
Dec  3 01:21:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.702243066 +0000 UTC m=+0.298915446 container init 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.73199106 +0000 UTC m=+0.328663400 container start 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:21:33 compute-0 podman[223636]: 2025-12-03 01:21:33.738622849 +0000 UTC m=+0.335295199 container attach 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:21:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  3 01:21:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec  3 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec  3 01:21:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 170 B/s wr, 4 op/s
Dec  3 01:21:34 compute-0 blissful_thompson[223651]: {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    "0": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "devices": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "/dev/loop3"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            ],
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_name": "ceph_lv0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_size": "21470642176",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "name": "ceph_lv0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "tags": {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.crush_device_class": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.encrypted": "0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_id": "0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.vdo": "0"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            },
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "vg_name": "ceph_vg0"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        }
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    ],
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    "1": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "devices": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "/dev/loop4"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            ],
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_name": "ceph_lv1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_size": "21470642176",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "name": "ceph_lv1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "tags": {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.crush_device_class": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.encrypted": "0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_id": "1",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.vdo": "0"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            },
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "vg_name": "ceph_vg1"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        }
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    ],
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    "2": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "devices": [
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "/dev/loop5"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            ],
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_name": "ceph_lv2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_size": "21470642176",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "name": "ceph_lv2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "tags": {
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.cluster_name": "ceph",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.crush_device_class": "",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.encrypted": "0",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osd_id": "2",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:                "ceph.vdo": "0"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            },
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "type": "block",
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:            "vg_name": "ceph_vg2"
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:        }
Dec  3 01:21:34 compute-0 blissful_thompson[223651]:    ]
Dec  3 01:21:34 compute-0 blissful_thompson[223651]: }
Dec  3 01:21:34 compute-0 systemd[1]: libpod-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope: Deactivated successfully.
Dec  3 01:21:34 compute-0 podman[223636]: 2025-12-03 01:21:34.526361841 +0000 UTC m=+1.123034181 container died 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:21:34 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  3 01:21:34 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  3 01:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0114df1ac583a032bc5d463be4d97651af1cfd931818f3a4942e5c9723c9a7-merged.mount: Deactivated successfully.
Dec  3 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  3 01:21:34 compute-0 podman[223636]: 2025-12-03 01:21:34.630787372 +0000 UTC m=+1.227459712 container remove 2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  3 01:21:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  3 01:21:34 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  3 01:21:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:21:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:34 compute-0 systemd[1]: libpod-conmon-2ffada55ab06243ca141c185dbf71e32b5f07195ec740c61893db8ae4e929efd.scope: Deactivated successfully.
Dec  3 01:21:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  3 01:21:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  3 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  3 01:21:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  3 01:21:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  3 01:21:35 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  3 01:21:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:21:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  3 01:21:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  3 01:21:35 compute-0 podman[223811]: 2025-12-03 01:21:35.893208566 +0000 UTC m=+0.089744726 container create 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:21:35 compute-0 podman[223811]: 2025-12-03 01:21:35.855440995 +0000 UTC m=+0.051977235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:35 compute-0 systemd[1]: Started libpod-conmon-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope.
Dec  3 01:21:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.047956766 +0000 UTC m=+0.244493016 container init 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.063403153 +0000 UTC m=+0.259939343 container start 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.070073634 +0000 UTC m=+0.266609824 container attach 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:21:36 compute-0 heuristic_williamson[223826]: 167 167
Dec  3 01:21:36 compute-0 systemd[1]: libpod-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope: Deactivated successfully.
Dec  3 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.075156711 +0000 UTC m=+0.271692941 container died 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-560e9db6b0dee563d0f4501391334332f860c2d389156b89f9900bd414f76011-merged.mount: Deactivated successfully.
Dec  3 01:21:36 compute-0 podman[223811]: 2025-12-03 01:21:36.158710038 +0000 UTC m=+0.355246218 container remove 1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:36 compute-0 systemd[1]: libpod-conmon-1a4b8451c2f793daf00c7e9c48939fde2c337741ed53eef79dcbc6c7d0f7e7db.scope: Deactivated successfully.
Dec  3 01:21:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Dec  3 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.438275891 +0000 UTC m=+0.110917318 container create e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.3797582 +0000 UTC m=+0.052399667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:21:36 compute-0 systemd[1]: Started libpod-conmon-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope.
Dec  3 01:21:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.635973312 +0000 UTC m=+0.308614769 container init e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.656725282 +0000 UTC m=+0.329366709 container start e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:36 compute-0 podman[223849]: 2025-12-03 01:21:36.663196657 +0000 UTC m=+0.335838134 container attach e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  3 01:21:36 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  3 01:21:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  3 01:21:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[9.0( v 51'584 (0'0,51'584] local-lis/les=45/46 n=209 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.293481827s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 51'583 active pruub 116.351562500s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.271992683s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 122.331710815s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.271992683s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 122.331710815s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 54 pg[9.0( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.293481827s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 0'0 unknown pruub 116.351562500s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec  3 01:21:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec  3 01:21:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  3 01:21:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  3 01:21:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  3 01:21:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  3 01:21:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] update: starting ev 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 233c150f-04e0-477a-98e5-41621722b9d6 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 35647f5d-0578-4b5f-a725-29df3e74f44a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 2025b0f0-f11a-46d2-b151-a6ffa9da5e6a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] complete: finished ev 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  3 01:21:37 compute-0 ceph-mgr[193109]: [progress INFO root] Completed event 14b2a7ad-382c-42a9-a127-e60085bd2d5d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec  3 01:21:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  3 01:21:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.15( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.14( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.17( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.16( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.11( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.3( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.2( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.d( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.c( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.f( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.9( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.b( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.e( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.a( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.8( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.6( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.7( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.4( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.5( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1a( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.18( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.19( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1e( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1f( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1c( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1d( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.13( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.12( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1b( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.10( v 51'584 lc 0'0 (0'0,51'584] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.0( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 51'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.2( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.a( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.4( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1a( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.14( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.12( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.10( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 55 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'584 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:37 compute-0 determined_jang[223865]: {
Dec  3 01:21:37 compute-0 determined_jang[223865]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_id": 2,
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "type": "bluestore"
Dec  3 01:21:37 compute-0 determined_jang[223865]:    },
Dec  3 01:21:37 compute-0 determined_jang[223865]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_id": 1,
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "type": "bluestore"
Dec  3 01:21:37 compute-0 determined_jang[223865]:    },
Dec  3 01:21:37 compute-0 determined_jang[223865]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_id": 0,
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:21:37 compute-0 determined_jang[223865]:        "type": "bluestore"
Dec  3 01:21:37 compute-0 determined_jang[223865]:    }
Dec  3 01:21:37 compute-0 determined_jang[223865]: }
Dec  3 01:21:37 compute-0 systemd[1]: libpod-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Deactivated successfully.
Dec  3 01:21:37 compute-0 podman[223849]: 2025-12-03 01:21:37.869228129 +0000 UTC m=+1.541869586 container died e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:21:37 compute-0 systemd[1]: libpod-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Consumed 1.202s CPU time.
Dec  3 01:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2e17c624155ad6a4007bc429d48cea2ea702d8d30a55d3bb7b61f625489aac2-merged.mount: Deactivated successfully.
Dec  3 01:21:37 compute-0 podman[223849]: 2025-12-03 01:21:37.9780997 +0000 UTC m=+1.650741127 container remove e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:21:38 compute-0 systemd[1]: libpod-conmon-e9236b34a26ff28cc4e6b1afa8a8ecdc7cb0265a17de3efd5a59e630acafc762.scope: Deactivated successfully.
Dec  3 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d8d632f8-d5c5-43c9-ab07-ddc05fd6c2ff does not exist
Dec  3 01:21:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 661c6b45-9ad5-4e4a-8289-2f6b7280fa8a does not exist
Dec  3 01:21:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  3 01:21:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  3 01:21:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v141: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:38 compute-0 podman[223935]: 2025-12-03 01:21:38.343398149 +0000 UTC m=+0.127746642 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:21:38 compute-0 ceph-mgr[193109]: [progress INFO root] Writing back 16 completed events
Dec  3 01:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  3 01:21:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  3 01:21:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  3 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:21:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  3 01:21:38 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  3 01:21:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  3 01:21:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  3 01:21:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  3 01:21:39 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.955306053s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 51'63 active pruub 119.121765137s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:39 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.955306053s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 unknown pruub 119.121765137s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:39 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=49/50 n=2 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.681290627s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 51'1 active pruub 120.458557129s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:39 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.681290627s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 unknown pruub 120.458557129s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  3 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  3 01:21:40 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  3 01:21:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.968 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.969 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:21:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:21:41 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  3 01:21:41 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  3 01:21:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  3 01:21:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  3 01:21:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  3 01:21:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  3 01:21:42 compute-0 podman[223982]: 2025-12-03 01:21:42.898618889 +0000 UTC m=+0.151565085 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, vcs-type=git)
Dec  3 01:21:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec  3 01:21:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec  3 01:21:43 compute-0 systemd-logind[800]: New session 40 of user zuul.
Dec  3 01:21:43 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec  3 01:21:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  3 01:21:43 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  3 01:21:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec  3 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  3 01:21:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:21:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:44 compute-0 python3.9[224156]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:21:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  3 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  3 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 01:21:45 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  3 01:21:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  3 01:21:45 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  3 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  3 01:21:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:21:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  3 01:21:45 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  3 01:21:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  3 01:21:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  3 01:21:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  3 01:21:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  3 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 01:21:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:21:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec  3 01:21:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.d( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370283127s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.194587708s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370039940s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.194450378s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385140419s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209640503s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369914055s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.194450378s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385062218s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209640503s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384991646s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209678650s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385028839s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.209892273s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.385006905s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209892273s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.d( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369626045s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.194587708s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384907722s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210037231s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384888649s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210037231s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384524345s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210105896s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384474754s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210105896s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384259224s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210113525s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384223938s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210113525s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384879112s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.209678650s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383697510s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210243225s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383665085s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210243225s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383474350s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210258484s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383479118s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210304260s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383446693s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210304260s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383396149s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210258484s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383208275s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210273743s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383172989s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210273743s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.9( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383893013s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211059570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.9( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383845329s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211059570s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.e( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383638382s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211250305s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382546425s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210205078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.e( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383591652s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211250305s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382506371s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210205078s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384715080s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212638855s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.384687424s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212638855s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382074356s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.210029602s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382016182s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.210029602s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.4( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383605003s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.211791992s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383579254s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.211791992s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.14( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382976532s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.211723328s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.14( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382926941s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.211723328s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.388989449s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514541626s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.388964653s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514541626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964451790s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.090469360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964327812s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.090469360s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953183174s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.079544067s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953125000s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.079544067s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.15( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383259773s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 120.212539673s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383138657s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212570190s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.15( v 57'65 (0'0,57'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383132935s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 120.212539673s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952919006s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.079666138s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.383103371s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212570190s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964179993s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.090972900s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382589340s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 120.212608337s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.382558823s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.212608337s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.964162827s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.090972900s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952864647s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.079666138s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376405716s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.503578186s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376358986s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.503578186s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.15( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375605583s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.503540039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.372826576s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.503540039s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955656052s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.091552734s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955075264s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091552734s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.378023148s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514656067s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377953529s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514656067s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377737045s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514732361s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.377631187s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514732361s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954754829s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.091903687s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955168724s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.091079712s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954708099s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091903687s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953830719s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.091079712s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955630302s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.092971802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.955589294s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092971802s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954670906s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092498779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954593658s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.092498779s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376757622s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514831543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954345703s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092498779s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376403809s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514778137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376577377s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514831543s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376350403s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514778137s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376034737s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514846802s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.376001358s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514846802s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953623772s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092971802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953594208s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092971802s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953457832s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.093017578s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953430176s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.093017578s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375778198s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515495300s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.375752449s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515495300s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954625130s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092498779s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.953042030s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094512939s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.952994347s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094512939s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.954445839s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.092956543s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950722694s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.092956543s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.372582436s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514900208s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951845169s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094528198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951808929s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094528198s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951431274s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094528198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951395988s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094528198s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.371561050s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514907837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.371533394s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514907837s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.951010704s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094573975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950970650s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094573975s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950576782s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094573975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.950470924s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094573975s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370937347s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515220642s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370909691s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515220642s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949990273s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094589233s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949938774s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094589233s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370164871s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.514961243s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370131493s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514961243s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949840546s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094924927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949512482s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094619751s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949478149s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094619751s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.949789047s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094924927s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.370048523s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515419006s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369997025s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515419006s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369194031s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.514900208s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948913574s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094680786s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948868752s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094680786s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948781013s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094726562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948717117s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094726562s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369411469s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515533447s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.369376183s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515533447s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.2( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368890762s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515296936s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368840218s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515296936s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948098183s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094726562s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368616104s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515251160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368573189s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515251160s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948044777s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094726562s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948143959s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094848633s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948064804s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094863892s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948086739s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094848633s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.948025703s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094863892s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368432999s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515319824s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368400574s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515319824s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947911263s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947873116s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368158340s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515388489s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.368126869s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515388489s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947536469s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094879150s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947506905s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094879150s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947427750s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094924927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367975235s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515487671s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947430611s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.094970703s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367946625s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515487671s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947390556s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094924927s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.947400093s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094970703s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367665291s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515464783s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367633820s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515464783s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367336273s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515449524s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946849823s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.094985962s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.367304802s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515449524s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946805954s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.094985962s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946585655s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095046997s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946508408s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095046997s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946412086s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.e( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946379662s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946297646s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095077515s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.946269035s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095077515s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.366655350s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515548706s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.366628647s) [0] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515548706s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.945642471s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 133.095062256s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.945541382s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095062256s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.365485191s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 127.515541077s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=9.365409851s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.515541077s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.943103790s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 133.095077515s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:46 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 58 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=14.942958832s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.095077515s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.9( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.1c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.6( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 58 pg[8.1a( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[8.11( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 58 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Dec  3 01:21:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Dec  3 01:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  3 01:21:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  3 01:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 01:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  3 01:21:47 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.14( v 57'65 lc 51'54 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 podman[224355]: 2025-12-03 01:21:47.259661364 +0000 UTC m=+0.130818035 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=58/59 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=58/59 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=58/59 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [2] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [2] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=58/59 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.e( v 57'65 lc 51'48 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.9( v 57'65 lc 51'56 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=58/59 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=58/59 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.d( v 57'65 lc 51'50 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 59 pg[10.15( v 57'65 lc 51'46 (0'0,57'65] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:47 compute-0 python3.9[224406]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:21:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  3 01:21:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  3 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  3 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  3 01:21:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  3 01:21:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 01:21:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec  3 01:21:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  3 01:21:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  3 01:21:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  3 01:21:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  3 01:21:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 60 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=59) [0]/[1] async=[0] r=0 lpr=59 pi=[54,59)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  3 01:21:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 01:21:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  3 01:21:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  3 01:21:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  3 01:21:49 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  3 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  3 01:21:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 1 objects/s recovering
Dec  3 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec  3 01:21:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  3 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  3 01:21:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 01:21:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  3 01:21:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  3 01:21:50 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  3 01:21:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.857470512s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537094116s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.857365608s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537094116s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.855461121s) [0] async=[0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537475586s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:50 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62 pruub=14.854380608s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537475586s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 62 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  3 01:21:50 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  3 01:21:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  3 01:21:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  3 01:21:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  3 01:21:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.859754562s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537872314s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.859655380s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537872314s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857912064s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537689209s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857936859s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537551880s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857842445s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537689209s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857610703s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537551880s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857017517s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537139893s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856969833s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537139893s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857148170s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537719727s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.857081413s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537719727s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856276512s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537124634s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856222153s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537124634s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.856063843s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537734985s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855966568s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537734985s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855431557s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537811279s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855373383s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537811279s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855633736s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537628174s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.855039597s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537628174s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.854851723s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537811279s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.854633331s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537811279s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.851238251s) [0] async=[0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537658691s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63 pruub=13.850149155s) [0] r=-1 lpr=63 pi=[54,63)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537658691s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.11( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 63 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Dec  3 01:21:52 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Dec  3 01:21:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 450 B/s, 14 objects/s recovering
Dec  3 01:21:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  3 01:21:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  3 01:21:52 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819808960s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537933350s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819722176s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537933350s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819305420s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537765503s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.819208145s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537765503s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.817730904s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 136.537902832s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 64 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=59/60 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64 pruub=12.817616463s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.537902832s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1b( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.d( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.9( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.5( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.b( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 64 pg[9.1d( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:52 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  3 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  3 01:21:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec  3 01:21:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec  3 01:21:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Dec  3 01:21:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Dec  3 01:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  3 01:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  3 01:21:53 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  3 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.3( v 51'584 (0'0,51'584] local-lis/les=64/65 n=7 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:53 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 65 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=59/54 les/c/f=60/55/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Dec  3 01:21:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Dec  3 01:21:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 6 active+remapped, 7 active+recovery_wait+remapped, 1 active+recovering+remapped, 307 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 43/247 objects misplaced (17.409%); 230 B/s, 12 objects/s recovering
Dec  3 01:21:55 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec  3 01:21:55 compute-0 systemd[1]: session-40.scope: Consumed 9.956s CPU time.
Dec  3 01:21:55 compute-0 systemd-logind[800]: Session 40 logged out. Waiting for processes to exit.
Dec  3 01:21:55 compute-0 systemd-logind[800]: Removed session 40.
Dec  3 01:21:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:21:56 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  3 01:21:56 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  3 01:21:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 376 B/s, 17 objects/s recovering
Dec  3 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec  3 01:21:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  3 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  3 01:21:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  3 01:21:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 01:21:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  3 01:21:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  3 01:21:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  3 01:21:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  3 01:21:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  3 01:21:57 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  3 01:21:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  3 01:21:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  3 01:21:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 01:21:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  3 01:21:57 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  3 01:21:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Dec  3 01:21:58 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Dec  3 01:21:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  3 01:21:58 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec  3 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec  3 01:21:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  3 01:21:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 01:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  3 01:21:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.275618553s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.091445923s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.275558472s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.091445923s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.278839111s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.095245361s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.278799057s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.095245361s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279337883s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.095932007s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279305458s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.095932007s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279449463s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 141.096679688s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:58 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 67 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=11.279399872s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.096679688s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  3 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Dec  3 01:21:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Dec  3 01:21:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.14 deep-scrub starts
Dec  3 01:21:59 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.14 deep-scrub ok
Dec  3 01:21:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Dec  3 01:21:59 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Dec  3 01:21:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  3 01:21:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 01:21:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  3 01:21:59 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:59 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 68 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:21:59 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:21:59 compute-0 podman[158098]: time="2025-12-03T01:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6810 "" "Go-http-client/1.1"
Dec  3 01:22:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 9 objects/s recovering
Dec  3 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  3 01:22:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  3 01:22:00 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 69 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:22:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:22:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  3 01:22:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  3 01:22:01 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.589989662s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.482360840s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.589857101s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.482360840s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.580208778s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.473205566s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.580095291s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.473205566s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:01 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.578581810s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.473434448s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=68/69 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.578310013s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.473434448s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.586289406s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 148.481918335s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:01 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 70 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.586196899s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.481918335s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  3 01:22:01 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  3 01:22:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 4 activating+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 24/247 objects misplaced (9.717%)
Dec  3 01:22:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Dec  3 01:22:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Dec  3 01:22:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  3 01:22:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  3 01:22:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  3 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.6( v 51'584 (0'0,51'584] local-lis/les=70/71 n=7 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:02 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 71 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70) [2] r=0 lpr=70 pi=[54,70)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  3 01:22:03 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  3 01:22:03 compute-0 podman[224472]: 2025-12-03 01:22:03.877639855 +0000 UTC m=+0.115524352 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:22:03 compute-0 podman[224470]: 2025-12-03 01:22:03.880352298 +0000 UTC m=+0.132191902 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:22:03 compute-0 podman[224471]: 2025-12-03 01:22:03.890190514 +0000 UTC m=+0.135064440 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 01:22:03 compute-0 podman[224473]: 2025-12-03 01:22:03.91447394 +0000 UTC m=+0.146421977 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:22:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 39 op/s; 45 B/s, 5 objects/s recovering
Dec  3 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec  3 01:22:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  3 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  3 01:22:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 01:22:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  3 01:22:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.795778275s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 active pruub 154.161010742s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.795719147s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 154.161010742s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:04 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=10.766449928s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'584 mlcod 0'0 active pruub 153.134201050s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=10.766380310s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 153.134201050s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.809161186s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 active pruub 154.177764893s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=11.809054375s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 154.177764893s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=12.812279701s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=51'584 mlcod 0'0 active pruub 155.181747437s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:04 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 72 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=12.811389923s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 155.181747437s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:04 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  3 01:22:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  3 01:22:05 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=62/63 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:05 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 73 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 01:22:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 5 objects/s recovering
Dec  3 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec  3 01:22:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  3 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  3 01:22:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 01:22:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  3 01:22:06 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  3 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:06 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 74 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[62,73)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  3 01:22:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910098076s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 149.096481323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910028458s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.096481323s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910104752s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 149.097000122s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:06 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 74 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=10.910059929s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.097000122s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:06 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:06 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  3 01:22:07 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  3 01:22:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  3 01:22:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  3 01:22:07 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.008032799s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 51'584 active pruub 160.152435303s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.015460968s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160507202s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.015264511s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160507202s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.014729500s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160614014s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=15.014652252s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160614014s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=15.014231682s) [2] async=[2] r=-1 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 51'584 active pruub 160.160644531s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=73/74 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=15.014155388s) [2] r=-1 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.160644531s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 75 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=73/74 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=15.007957458s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 160.152435303s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:07 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 75 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 75 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec  3 01:22:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  3 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  3 01:22:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 01:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  3 01:22:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  3 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.7( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 76 pg[9.17( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 76 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 76 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  3 01:22:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 01:22:08 compute-0 podman[224563]: 2025-12-03 01:22:08.877076256 +0000 UTC m=+0.126305194 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  3 01:22:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  3 01:22:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  3 01:22:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  3 01:22:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  3 01:22:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  3 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.017930984s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 155.747756958s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=75/76 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.017811775s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.747756958s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.016313553s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 155.746994019s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=15.016211510s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.746994019s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 77 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Dec  3 01:22:09 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Dec  3 01:22:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 5 objects/s recovering
Dec  3 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec  3 01:22:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  3 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:10 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  3 01:22:10 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  3 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  3 01:22:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 01:22:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  3 01:22:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  3 01:22:10 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 78 pg[9.8( v 51'584 (0'0,51'584] local-lis/les=77/78 n=7 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:10 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 78 pg[9.18( v 51'584 (0'0,51'584] local-lis/les=77/78 n=6 ec=54/45 lis/c=75/54 les/c/f=76/55/0 sis=77) [2] r=0 lpr=77 pi=[54,77)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  3 01:22:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 01:22:11 compute-0 systemd-logind[800]: New session 41 of user zuul.
Dec  3 01:22:11 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec  3 01:22:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 111 B/s, 4 objects/s recovering
Dec  3 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec  3 01:22:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  3 01:22:12 compute-0 python3.9[224737]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  3 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  3 01:22:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  3 01:22:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 01:22:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  3 01:22:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  3 01:22:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 01:22:13 compute-0 podman[224885]: 2025-12-03 01:22:13.838872461 +0000 UTC m=+0.137016122 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible)
Dec  3 01:22:14 compute-0 python3.9[224927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:22:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 6 objects/s recovering
Dec  3 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec  3 01:22:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  3 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  3 01:22:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  3 01:22:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 01:22:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  3 01:22:14 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  3 01:22:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:15 compute-0 python3.9[225086]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:22:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.868826866s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 157.097564697s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.868764877s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.097564697s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.864498138s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 157.093978882s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:15 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 80 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=9.864447594s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.093978882s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:15 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Dec  3 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec  3 01:22:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  3 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  3 01:22:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  3 01:22:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 01:22:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  3 01:22:16 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  3 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:16 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:16 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 81 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:17 compute-0 python3.9[225239]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:22:17 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  3 01:22:17 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  3 01:22:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  3 01:22:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  3 01:22:17 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  3 01:22:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 01:22:17 compute-0 podman[225318]: 2025-12-03 01:22:17.904909727 +0000 UTC m=+0.147079845 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:22:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  3 01:22:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 82 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 82 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[54,81)/1 crt=51'584 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 2 objects/s recovering
Dec  3 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec  3 01:22:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  3 01:22:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  3 01:22:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  3 01:22:18 compute-0 python3.9[225418]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  3 01:22:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 01:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  3 01:22:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  3 01:22:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  3 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.466417313s) [2] async=[2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 165.617324829s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.466328621s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 165.617324829s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.465260506s) [2] async=[2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 active pruub 165.617401123s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:18 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=81/82 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83 pruub=15.465150833s) [2] r=-1 lpr=83 pi=[54,83)/1 crt=51'584 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 165.617401123s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:18 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 83 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:19 compute-0 python3.9[225571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:22:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  3 01:22:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  3 01:22:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  3 01:22:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 01:22:19 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 84 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:19 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 84 pg[9.c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=7 ec=54/45 lis/c=81/54 les/c/f=82/55/0 sis=83) [2] r=0 lpr=83 pi=[54,83)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec  3 01:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  3 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  3 01:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 01:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  3 01:22:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  3 01:22:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  3 01:22:20 compute-0 python3.9[225722]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:22:21 compute-0 network[225739]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:22:21 compute-0 network[225740]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:22:21 compute-0 network[225741]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:22:21 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  3 01:22:21 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  3 01:22:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  3 01:22:21 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  3 01:22:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 01:22:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Dec  3 01:22:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Dec  3 01:22:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec  3 01:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  3 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  3 01:22:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  3 01:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  3 01:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  3 01:22:22 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  3 01:22:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  3 01:22:23 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  3 01:22:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  3 01:22:23 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  3 01:22:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  3 01:22:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  3 01:22:24 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  3 01:22:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  3 01:22:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 2 objects/s recovering
Dec  3 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec  3 01:22:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  3 01:22:24 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  3 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  3 01:22:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  3 01:22:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  3 01:22:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  3 01:22:24 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  3 01:22:25 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  3 01:22:25 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  3 01:22:25 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  3 01:22:25 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  3 01:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:25 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  3 01:22:25 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  3 01:22:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  3 01:22:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec  3 01:22:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  3 01:22:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec  3 01:22:26 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec  3 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  3 01:22:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  3 01:22:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  3 01:22:26 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  3 01:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  3 01:22:27 compute-0 python3.9[226011]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:22:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  3 01:22:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:22:28
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:22:28 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec  3 01:22:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  3 01:22:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  3 01:22:28 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:22:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:22:28 compute-0 python3.9[226161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  3 01:22:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  3 01:22:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  3 01:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  3 01:22:28 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  3 01:22:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  3 01:22:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  3 01:22:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  3 01:22:29 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  3 01:22:29 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 89 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89 pruub=11.021018028s) [2] r=-1 lpr=89 pi=[63,89)/1 crt=51'584 mlcod 0'0 active pruub 178.180419922s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:29 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 89 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89 pruub=11.020961761s) [2] r=-1 lpr=89 pi=[63,89)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 178.180419922s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:29 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=89) [2] r=0 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:29 compute-0 podman[158098]: time="2025-12-03T01:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec  3 01:22:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  3 01:22:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  3 01:22:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  3 01:22:30 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  3 01:22:30 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 90 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:30 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:30 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:30 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 90 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec  3 01:22:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  3 01:22:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:30 compute-0 python3.9[226315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:22:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  3 01:22:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  3 01:22:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  3 01:22:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  3 01:22:31 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  3 01:22:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  3 01:22:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  3 01:22:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  3 01:22:31 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  3 01:22:31 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 91 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=90) [2]/[0] async=[2] r=0 lpr=90 pi=[63,90)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:22:31 compute-0 openstack_network_exporter[160250]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  3 01:22:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  3 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  3 01:22:32 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  3 01:22:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:32 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:32 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92 pruub=15.261384964s) [2] async=[2] r=-1 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 51'584 active pruub 185.113540649s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:32 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 92 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=90/91 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92 pruub=15.261201859s) [2] r=-1 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 185.113540649s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Dec  3 01:22:32 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Dec  3 01:22:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec  3 01:22:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  3 01:22:32 compute-0 python3.9[226474]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:22:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  3 01:22:32 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  3 01:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  3 01:22:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  3 01:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  3 01:22:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  3 01:22:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  3 01:22:33 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 93 pg[9.13( v 51'584 (0'0,51'584] local-lis/les=92/93 n=6 ec=54/45 lis/c=90/63 les/c/f=91/64/0 sis=92) [2] r=0 lpr=92 pi=[63,92)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  3 01:22:33 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  3 01:22:33 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 93 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=14.969126701s) [1] r=-1 lpr=93 pi=[63,93)/1 crt=51'584 mlcod 0'0 active pruub 186.162857056s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:33 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 93 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93 pruub=14.969079971s) [1] r=-1 lpr=93 pi=[63,93)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 186.162857056s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:33 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=93) [1] r=0 lpr=93 pi=[63,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:33 compute-0 python3.9[226558]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  3 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  3 01:22:34 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  3 01:22:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  3 01:22:34 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 94 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:34 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 94 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=63/64 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:34 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:34 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  3 01:22:34 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  3 01:22:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  3 01:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec  3 01:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  3 01:22:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  3 01:22:34 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  3 01:22:34 compute-0 podman[226592]: 2025-12-03 01:22:34.883821481 +0000 UTC m=+0.120736252 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Dec  3 01:22:34 compute-0 podman[226591]: 2025-12-03 01:22:34.889252291 +0000 UTC m=+0.127477518 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:22:34 compute-0 podman[226590]: 2025-12-03 01:22:34.913821624 +0000 UTC m=+0.151825575 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:22:34 compute-0 podman[226593]: 2025-12-03 01:22:34.923949642 +0000 UTC m=+0.154748006 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  3 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  3 01:22:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  3 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  3 01:22:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  3 01:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  3 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 95 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=63/63 les/c/f=64/64/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[63,94)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  3 01:22:35 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  3 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  3 01:22:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  3 01:22:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  3 01:22:35 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 95 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95 pruub=15.210299492s) [0] r=-1 lpr=95 pi=[70,95)/1 crt=51'584 mlcod 0'0 active pruub 174.617996216s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:35 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 96 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95 pruub=15.210191727s) [0] r=-1 lpr=95 pi=[70,95)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 174.617996216s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=95) [0] r=0 lpr=96 pi=[70,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=15.794424057s) [1] async=[1] r=-1 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 51'584 active pruub 188.931213379s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:35 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96 pruub=15.794380188s) [1] r=-1 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 188.931213379s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:35 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 96 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Dec  3 01:22:35 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Dec  3 01:22:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  3 01:22:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  3 01:22:36 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  3 01:22:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Dec  3 01:22:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  3 01:22:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  3 01:22:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  3 01:22:36 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 97 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:36 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 97 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[70,97)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:36 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[70,97)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:36 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 97 pg[9.15( v 51'584 (0'0,51'584] local-lis/les=96/97 n=6 ec=54/45 lis/c=94/63 les/c/f=95/64/0 sis=96) [1] r=0 lpr=96 pi=[63,96)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  3 01:22:37 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  3 01:22:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  3 01:22:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  3 01:22:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  3 01:22:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  3 01:22:37 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  3 01:22:37 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 98 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=97) [0]/[2] async=[0] r=0 lpr=97 pi=[70,97)/2 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:22:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:22:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  3 01:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  3 01:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  3 01:22:38 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  3 01:22:38 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:38 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:38 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.138869286s) [0] async=[0] r=-1 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 51'584 active pruub 177.577514648s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:38 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 99 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=97/98 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.138772011s) [0] r=-1 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 177.577514648s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  3 01:22:38 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  3 01:22:39 compute-0 podman[226810]: 2025-12-03 01:22:39.080307765 +0000 UTC m=+0.132897106 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:22:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  3 01:22:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  3 01:22:39 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  3 01:22:39 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 100 pg[9.16( v 51'584 (0'0,51'584] local-lis/les=99/100 n=6 ec=54/45 lis/c=97/70 les/c/f=98/71/0 sis=99) [0] r=0 lpr=99 pi=[70,99)/2 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  3 01:22:39 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  3 01:22:39 compute-0 podman[226901]: 2025-12-03 01:22:39.764164364 +0000 UTC m=+0.119026566 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:22:39 compute-0 podman[226901]: 2025-12-03 01:22:39.875128178 +0000 UTC m=+0.229990310 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:22:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  3 01:22:40 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  3 01:22:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  3 01:22:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  3 01:22:40 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  3 01:22:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:22:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:22:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  3 01:22:41 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  3 01:22:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb9a678d-0fae-45e4-b1a1-16b768a5f871 does not exist
Dec  3 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2abab8ee-f607-40c9-91f0-e9d4e19e5fd1 does not exist
Dec  3 01:22:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 51b9be22-dc49-4b84-82d9-80cd0a7f3329 does not exist
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:22:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:22:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  3 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  3 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:22:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  3 01:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  3 01:22:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.55683098 +0000 UTC m=+0.085376983 container create e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.518673754 +0000 UTC m=+0.047219797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:43 compute-0 systemd[1]: Started libpod-conmon-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope.
Dec  3 01:22:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.720762406 +0000 UTC m=+0.249308409 container init e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.73911833 +0000 UTC m=+0.267664333 container start e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:22:43 compute-0 brave_shaw[227333]: 167 167
Dec  3 01:22:43 compute-0 systemd[1]: libpod-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope: Deactivated successfully.
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.754336737 +0000 UTC m=+0.282882750 container attach e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.755043487 +0000 UTC m=+0.283589460 container died e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2358cbf7342a65578f1c7824bedb51afc73cbaae83f049c7a3cfd445496635-merged.mount: Deactivated successfully.
Dec  3 01:22:43 compute-0 podman[227319]: 2025-12-03 01:22:43.83247216 +0000 UTC m=+0.361018133 container remove e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:22:43 compute-0 systemd[1]: libpod-conmon-e3640668b83fb8aabe53abec9fc57e1b32a15ce49523e30a54d95d2fff12f43b.scope: Deactivated successfully.
Dec  3 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.08983819 +0000 UTC m=+0.079904843 container create f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.053048651 +0000 UTC m=+0.043115304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:44 compute-0 systemd[1]: Started libpod-conmon-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope.
Dec  3 01:22:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  3 01:22:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec  3 01:22:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec  3 01:22:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  3 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.28411221 +0000 UTC m=+0.274178923 container init f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:22:44 compute-0 podman[227371]: 2025-12-03 01:22:44.29544166 +0000 UTC m=+0.124693371 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, architecture=x86_64)
Dec  3 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.313047413 +0000 UTC m=+0.303114046 container start f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:22:44 compute-0 podman[227357]: 2025-12-03 01:22:44.318086601 +0000 UTC m=+0.308153315 container attach f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  3 01:22:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  3 01:22:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  3 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  3 01:22:45 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  3 01:22:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:45 compute-0 priceless_dubinsky[227382]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:22:45 compute-0 priceless_dubinsky[227382]: --> relative data size: 1.0
Dec  3 01:22:45 compute-0 priceless_dubinsky[227382]: --> All data devices are unavailable
Dec  3 01:22:45 compute-0 systemd[1]: libpod-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Deactivated successfully.
Dec  3 01:22:45 compute-0 podman[227357]: 2025-12-03 01:22:45.620728774 +0000 UTC m=+1.610795477 container died f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:22:45 compute-0 systemd[1]: libpod-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Consumed 1.242s CPU time.
Dec  3 01:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-80205ca41da077b6ade472f0bc45db0d10e47b343d3280c789df1c0108197be2-merged.mount: Deactivated successfully.
Dec  3 01:22:45 compute-0 podman[227357]: 2025-12-03 01:22:45.743000738 +0000 UTC m=+1.733067391 container remove f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dubinsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:22:45 compute-0 systemd[1]: libpod-conmon-f34adea03fb13a6802c40e28e76c659f40fa23a0c7c0ab2da7a6e10e50b2d7b9.scope: Deactivated successfully.
Dec  3 01:22:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  3 01:22:46 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  3 01:22:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  3 01:22:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec  3 01:22:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec  3 01:22:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  3 01:22:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  3 01:22:46 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  3 01:22:46 compute-0 podman[227575]: 2025-12-03 01:22:46.851716932 +0000 UTC m=+0.082545385 container create 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:22:46 compute-0 podman[227575]: 2025-12-03 01:22:46.818778918 +0000 UTC m=+0.049607421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:46 compute-0 systemd[1]: Started libpod-conmon-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope.
Dec  3 01:22:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.04010925 +0000 UTC m=+0.270937713 container init 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.057498007 +0000 UTC m=+0.288326470 container start 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.064876769 +0000 UTC m=+0.295705272 container attach 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:22:47 compute-0 hardcore_knuth[227591]: 167 167
Dec  3 01:22:47 compute-0 systemd[1]: libpod-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope: Deactivated successfully.
Dec  3 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.070849813 +0000 UTC m=+0.301678326 container died 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c742a451224b77b42ebcf673a5ca6f5d6212e439ac6fbbd8fa46ffea884f64b-merged.mount: Deactivated successfully.
Dec  3 01:22:47 compute-0 podman[227575]: 2025-12-03 01:22:47.159461844 +0000 UTC m=+0.390290297 container remove 92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:22:47 compute-0 systemd[1]: libpod-conmon-92ed2ead8eb477108dcce747e488de1446a23bef778c808911c743b0ffbb4b9c.scope: Deactivated successfully.
Dec  3 01:22:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  3 01:22:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  3 01:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  3 01:22:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  3 01:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  3 01:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  3 01:22:47 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  3 01:22:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  3 01:22:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  3 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.459244236 +0000 UTC m=+0.103119690 container create d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.429157701 +0000 UTC m=+0.073033145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 103 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103 pruub=9.852219582s) [2] r=-1 lpr=103 pi=[64,103)/1 crt=51'584 mlcod 0'0 active pruub 195.177246094s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:47 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 103 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103 pruub=9.852044106s) [2] r=-1 lpr=103 pi=[64,103)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 195.177246094s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:47 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=103) [2] r=0 lpr=103 pi=[64,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:47 compute-0 systemd[1]: Started libpod-conmon-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope.
Dec  3 01:22:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.657170745 +0000 UTC m=+0.301046189 container init d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.675515919 +0000 UTC m=+0.319391343 container start d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:22:47 compute-0 podman[227614]: 2025-12-03 01:22:47.682655364 +0000 UTC m=+0.326530798 container attach d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:22:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  3 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec  3 01:22:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  3 01:22:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  3 01:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  3 01:22:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  3 01:22:48 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[64,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:48 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[64,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:48 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 104 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:48 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 104 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=64/65 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]: {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    "0": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "devices": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "/dev/loop3"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            ],
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_name": "ceph_lv0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_size": "21470642176",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "name": "ceph_lv0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "tags": {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_name": "ceph",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.crush_device_class": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.encrypted": "0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_id": "0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.vdo": "0"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            },
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "vg_name": "ceph_vg0"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        }
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    ],
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    "1": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "devices": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "/dev/loop4"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            ],
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_name": "ceph_lv1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_size": "21470642176",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "name": "ceph_lv1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "tags": {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_name": "ceph",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.crush_device_class": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.encrypted": "0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_id": "1",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.vdo": "0"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            },
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "vg_name": "ceph_vg1"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        }
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    ],
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    "2": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "devices": [
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "/dev/loop5"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            ],
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_name": "ceph_lv2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_size": "21470642176",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "name": "ceph_lv2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "tags": {
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.cluster_name": "ceph",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.crush_device_class": "",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.encrypted": "0",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osd_id": "2",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:                "ceph.vdo": "0"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            },
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "type": "block",
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:            "vg_name": "ceph_vg2"
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:        }
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]:    ]
Dec  3 01:22:48 compute-0 laughing_goldstine[227627]: }
Dec  3 01:22:48 compute-0 systemd[1]: libpod-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope: Deactivated successfully.
Dec  3 01:22:48 compute-0 conmon[227627]: conmon d2ad25cc24cd95b15638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope/container/memory.events
Dec  3 01:22:48 compute-0 podman[227614]: 2025-12-03 01:22:48.516288512 +0000 UTC m=+1.160163936 container died d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b289cd5d73a0337b23d2f25629b0eec2199a095019a98e5408fed6261427230-merged.mount: Deactivated successfully.
Dec  3 01:22:48 compute-0 podman[227614]: 2025-12-03 01:22:48.624793418 +0000 UTC m=+1.268668842 container remove d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:22:48 compute-0 systemd[1]: libpod-conmon-d2ad25cc24cd95b156389ef51a63cef0526835b5be096242fbaed5a5c575c14b.scope: Deactivated successfully.
Dec  3 01:22:48 compute-0 podman[227641]: 2025-12-03 01:22:48.718262262 +0000 UTC m=+0.163031813 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:22:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  3 01:22:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  3 01:22:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  3 01:22:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  3 01:22:49 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.766380943 +0000 UTC m=+0.090663127 container create 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.728059732 +0000 UTC m=+0.052341956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:49 compute-0 systemd[1]: Started libpod-conmon-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope.
Dec  3 01:22:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.902648961 +0000 UTC m=+0.226931195 container init 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.920123551 +0000 UTC m=+0.244405735 container start 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:22:49 compute-0 clever_dubinsky[227847]: 167 167
Dec  3 01:22:49 compute-0 systemd[1]: libpod-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope: Deactivated successfully.
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.929129388 +0000 UTC m=+0.253411622 container attach 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:22:49 compute-0 podman[227826]: 2025-12-03 01:22:49.937335013 +0000 UTC m=+0.261617197 container died 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ec8186e83add3c50326dee57e9090d54f6f8e0d99f3b59b16de38250330549-merged.mount: Deactivated successfully.
Dec  3 01:22:50 compute-0 podman[227826]: 2025-12-03 01:22:50.020014011 +0000 UTC m=+0.344296195 container remove 1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dubinsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:22:50 compute-0 systemd[1]: libpod-conmon-1c5a0d301c33c8ae02d2836b56c0c36415efcbaf3c9828ef803665cb2e459225.scope: Deactivated successfully.
Dec  3 01:22:50 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 105 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[64,104)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec  3 01:22:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.341952452 +0000 UTC m=+0.108502887 container create 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:22:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  3 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.294690516 +0000 UTC m=+0.061240991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:22:50 compute-0 systemd[1]: Started libpod-conmon-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope.
Dec  3 01:22:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  3 01:22:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:22:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  3 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.529679772 +0000 UTC m=+0.296230217 container init 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.541154466 +0000 UTC m=+0.307704871 container start 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:22:50 compute-0 podman[227870]: 2025-12-03 01:22:50.548023715 +0000 UTC m=+0.314574150 container attach 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:22:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec  3 01:22:51 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec  3 01:22:51 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  3 01:22:51 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  3 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  3 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  3 01:22:51 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  3 01:22:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106 pruub=14.746579170s) [2] async=[2] r=-1 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 51'584 active pruub 204.004882812s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:51 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=104/105 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106 pruub=14.746412277s) [2] r=-1 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 204.004882812s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:51 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:51 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 106 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:51 compute-0 nice_joliot[227892]: {
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_id": 2,
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "type": "bluestore"
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    },
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_id": 1,
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "type": "bluestore"
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    },
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_id": 0,
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:22:51 compute-0 nice_joliot[227892]:        "type": "bluestore"
Dec  3 01:22:51 compute-0 nice_joliot[227892]:    }
Dec  3 01:22:51 compute-0 nice_joliot[227892]: }
Dec  3 01:22:51 compute-0 systemd[1]: libpod-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Deactivated successfully.
Dec  3 01:22:51 compute-0 podman[227870]: 2025-12-03 01:22:51.645617722 +0000 UTC m=+1.412168127 container died 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:22:51 compute-0 systemd[1]: libpod-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Consumed 1.093s CPU time.
Dec  3 01:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-446edad9d286f17430d53d6e40075d8ca6b7312e91d79e1bb8080d589982bd88-merged.mount: Deactivated successfully.
Dec  3 01:22:51 compute-0 podman[227870]: 2025-12-03 01:22:51.754626392 +0000 UTC m=+1.521176807 container remove 97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 01:22:51 compute-0 systemd[1]: libpod-conmon-97ca828246667053e39bdc765042bbf175c7af8ae72d70e5f0912a71d78ce68a.scope: Deactivated successfully.
Dec  3 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:22:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:22:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 85c914a6-0973-46fc-8145-be06b55ef61b does not exist
Dec  3 01:22:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 35e69fdf-0d0a-4833-a23f-23db4ee8aa04 does not exist
Dec  3 01:22:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 1 activating+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11/247 objects misplaced (4.453%)
Dec  3 01:22:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  3 01:22:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  3 01:22:52 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  3 01:22:52 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 107 pg[9.19( v 51'584 (0'0,51'584] local-lis/les=106/107 n=6 ec=54/45 lis/c=104/64 les/c/f=105/65/0 sis=106) [2] r=0 lpr=106 pi=[64,106)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:22:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:22:52 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  3 01:22:53 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  3 01:22:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  3 01:22:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  3 01:22:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec  3 01:22:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  3 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  3 01:22:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  3 01:22:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  3 01:22:54 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  3 01:22:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  3 01:22:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  3 01:22:55 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  3 01:22:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  3 01:22:55 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  3 01:22:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:22:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  3 01:22:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec  3 01:22:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  3 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  3 01:22:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  3 01:22:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  3 01:22:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  3 01:22:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  3 01:22:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  3 01:22:56 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  3 01:22:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Dec  3 01:22:57 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Dec  3 01:22:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  3 01:22:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  3 01:22:58 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  3 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 109 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109 pruub=9.550034523s) [0] r=-1 lpr=109 pi=[83,109)/1 crt=51'584 mlcod 0'0 active pruub 191.896911621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 109 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109 pruub=9.549950600s) [0] r=-1 lpr=109 pi=[83,109)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 191.896911621s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=109) [0] r=0 lpr=109 pi=[83,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec  3 01:22:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  3 01:22:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  3 01:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  3 01:22:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  3 01:22:58 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  3 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[83,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:58 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[83,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 110 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:22:58 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 110 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:22:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  3 01:22:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  3 01:22:59 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  3 01:22:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  3 01:22:59 compute-0 podman[158098]: time="2025-12-03T01:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec  3 01:23:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 111 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=83/83 les/c/f=84/84/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[83,110)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  3 01:23:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  3 01:23:00 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  3 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112 pruub=15.621837616s) [0] async=[0] r=-1 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 51'584 active pruub 200.365066528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:00 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=110/111 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112 pruub=15.621644974s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 200.365066528s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:00 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:00 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 112 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:23:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:23:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  3 01:23:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  3 01:23:01 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  3 01:23:01 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 113 pg[9.1c( v 51'584 (0'0,51'584] local-lis/les=112/113 n=6 ec=54/45 lis/c=110/83 les/c/f=111/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  3 01:23:02 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  3 01:23:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  3 01:23:02 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  3 01:23:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  3 01:23:03 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  3 01:23:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  3 01:23:05 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  3 01:23:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:05 compute-0 podman[228034]: 2025-12-03 01:23:05.867490022 +0000 UTC m=+0.108663951 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:23:05 compute-0 podman[228036]: 2025-12-03 01:23:05.888220181 +0000 UTC m=+0.121869624 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:23:05 compute-0 podman[228035]: 2025-12-03 01:23:05.888312783 +0000 UTC m=+0.128449554 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec  3 01:23:05 compute-0 podman[228037]: 2025-12-03 01:23:05.937814571 +0000 UTC m=+0.160739990 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:23:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec  3 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  3 01:23:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  3 01:23:06 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  3 01:23:06 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  3 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  3 01:23:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  3 01:23:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  3 01:23:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  3 01:23:06 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  3 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 114 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114 pruub=14.875950813s) [0] r=-1 lpr=114 pi=[70,114)/1 crt=51'584 mlcod 0'0 active pruub 206.620376587s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 114 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114 pruub=14.875855446s) [0] r=-1 lpr=114 pi=[70,114)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 206.620376587s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=114) [0] r=0 lpr=114 pi=[70,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  3 01:23:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  3 01:23:07 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  3 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 115 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:07 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 115 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=70/71 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  3 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:07 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[70,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Dec  3 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  3 01:23:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  3 01:23:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  3 01:23:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  3 01:23:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  3 01:23:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116 pruub=11.509423256s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=51'584 mlcod 0'0 active pruub 204.445098877s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:08 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116 pruub=11.508624077s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 204.445098877s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:08 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:08 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  3 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  3 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 116 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[70,115)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  3 01:23:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  3 01:23:09 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:09 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:09 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  3 01:23:09 compute-0 podman[228122]: 2025-12-03 01:23:09.891219467 +0000 UTC m=+0.143905179 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:23:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[75,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:09 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[75,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117 pruub=15.444588661s) [0] async=[0] r=-1 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 51'584 active pruub 209.424819946s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=115/116 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117 pruub=15.444419861s) [0] r=-1 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 209.424819946s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:09 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 117 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=75/76 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Dec  3 01:23:09 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Dec  3 01:23:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  3 01:23:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  3 01:23:10 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  3 01:23:10 compute-0 ceph-osd[206633]: osd.0 pg_epoch: 118 pg[9.1e( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=115/70 les/c/f=116/71/0 sis=117) [0] r=0 lpr=117 pi=[70,117)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  3 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  3 01:23:11 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  3 01:23:11 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  3 01:23:11 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 118 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=75/75 les/c/f=76/76/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[75,117)/1 crt=51'584 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  3 01:23:11 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  3 01:23:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:12 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Dec  3 01:23:12 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Dec  3 01:23:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  3 01:23:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  3 01:23:12 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  3 01:23:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 luod=0'0 crt=51'584 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:12 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 01:23:12 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119 pruub=14.921385765s) [1] async=[1] r=-1 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 51'584 active pruub 211.960662842s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  3 01:23:12 compute-0 ceph-osd[208731]: osd.2 pg_epoch: 119 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119 pruub=14.921197891s) [1] r=-1 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 unknown NOTIFY pruub 211.960662842s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 01:23:13 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  3 01:23:13 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  3 01:23:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Dec  3 01:23:13 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Dec  3 01:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  3 01:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  3 01:23:13 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  3 01:23:13 compute-0 ceph-osd[207705]: osd.1 pg_epoch: 120 pg[9.1f( v 51'584 (0'0,51'584] local-lis/les=119/120 n=6 ec=54/45 lis/c=117/75 les/c/f=118/76/0 sis=119) [1] r=0 lpr=119 pi=[75,119)/1 crt=51'584 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 01:23:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Dec  3 01:23:14 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Dec  3 01:23:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:14 compute-0 podman[228140]: 2025-12-03 01:23:14.860947491 +0000 UTC m=+0.121238976 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc.)
Dec  3 01:23:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  3 01:23:17 compute-0 python3.9[228311]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:23:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec  3 01:23:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  3 01:23:18 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  3 01:23:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  3 01:23:18 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  3 01:23:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Dec  3 01:23:19 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Dec  3 01:23:19 compute-0 podman[228577]: 2025-12-03 01:23:19.878942879 +0000 UTC m=+0.132739223 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:23:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  3 01:23:20 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  3 01:23:20 compute-0 python3.9[228626]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  3 01:23:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec  3 01:23:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:21 compute-0 python3.9[228781]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  3 01:23:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Dec  3 01:23:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Dec  3 01:23:22 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Dec  3 01:23:22 compute-0 python3.9[228933]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:23:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Dec  3 01:23:22 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Dec  3 01:23:23 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Dec  3 01:23:23 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Dec  3 01:23:24 compute-0 python3.9[229085]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  3 01:23:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Dec  3 01:23:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  3 01:23:24 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  3 01:23:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:25 compute-0 python3.9[229237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:23:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec  3 01:23:26 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  3 01:23:26 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  3 01:23:27 compute-0 python3.9[229389]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:23:27 compute-0 python3.9[229467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:23:27 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Dec  3 01:23:27 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:23:28
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', 'default.rgw.meta']
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:23:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:23:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec  3 01:23:29 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec  3 01:23:29 compute-0 python3.9[229619]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:23:29 compute-0 podman[158098]: time="2025-12-03T01:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec  3 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  3 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  3 01:23:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:30 compute-0 python3.9[229774]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  3 01:23:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  3 01:23:30 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  3 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec  3 01:23:30 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:23:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:23:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  3 01:23:31 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  3 01:23:32 compute-0 python3.9[229927]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  3 01:23:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec  3 01:23:32 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec  3 01:23:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  3 01:23:33 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  3 01:23:33 compute-0 python3.9[230081]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 01:23:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:34 compute-0 python3.9[230235]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  3 01:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:35 compute-0 python3.9[230387]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:23:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:36 compute-0 systemd[194622]: Created slice User Background Tasks Slice.
Dec  3 01:23:36 compute-0 systemd[194622]: Starting Cleanup of User's Temporary Files and Directories...
Dec  3 01:23:36 compute-0 systemd[194622]: Finished Cleanup of User's Temporary Files and Directories.
Dec  3 01:23:36 compute-0 podman[230395]: 2025-12-03 01:23:36.873343067 +0000 UTC m=+0.102647203 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 01:23:36 compute-0 podman[230390]: 2025-12-03 01:23:36.888899059 +0000 UTC m=+0.142579974 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:23:36 compute-0 podman[230391]: 2025-12-03 01:23:36.890194491 +0000 UTC m=+0.126159350 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec  3 01:23:36 compute-0 podman[230398]: 2025-12-03 01:23:36.915206905 +0000 UTC m=+0.139745044 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 01:23:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  3 01:23:36 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  3 01:23:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec  3 01:23:37 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:23:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:23:38 compute-0 python3.9[230622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:23:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  3 01:23:38 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  3 01:23:39 compute-0 python3.9[230774]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:23:39 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Dec  3 01:23:39 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Dec  3 01:23:40 compute-0 podman[230852]: 2025-12-03 01:23:40.174892749 +0000 UTC m=+0.125151686 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:23:40 compute-0 python3.9[230853]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:23:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.968 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.969 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.969 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:23:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:23:41 compute-0 python3.9[231022]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:23:42 compute-0 python3.9[231100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:23:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  3 01:23:42 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  3 01:23:43 compute-0 python3.9[231253]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:23:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  3 01:23:43 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  3 01:23:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec  3 01:23:45 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec  3 01:23:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:45 compute-0 podman[231379]: 2025-12-03 01:23:45.873478602 +0000 UTC m=+0.124122330 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, container_name=kepler, name=ubi9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4)
Dec  3 01:23:46 compute-0 python3.9[231421]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:23:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Dec  3 01:23:46 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Dec  3 01:23:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec  3 01:23:47 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec  3 01:23:47 compute-0 python3.9[231576]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  3 01:23:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec  3 01:23:47 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec  3 01:23:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  3 01:23:47 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  3 01:23:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  3 01:23:48 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  3 01:23:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:48 compute-0 python3.9[231726]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:23:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  3 01:23:48 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  3 01:23:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  3 01:23:48 compute-0 ceph-osd[207705]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  3 01:23:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  3 01:23:49 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  3 01:23:50 compute-0 python3.9[231881]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:23:50 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  3 01:23:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:23:50 compute-0 podman[231883]: 2025-12-03 01:23:50.379299947 +0000 UTC m=+0.124660604 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:23:50 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec  3 01:23:50 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  3 01:23:50 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 01:23:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Dec  3 01:23:50 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Dec  3 01:23:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 01:23:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec  3 01:23:51 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec  3 01:23:52 compute-0 python3.9[232066]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  3 01:23:52 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec  3 01:23:52 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec  3 01:23:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec  3 01:23:53 compute-0 ceph-osd[208731]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ef044f53-83e6-4493-a979-c18aaf0720e1 does not exist
Dec  3 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f127b0c4-f378-4b12-8718-44e035b60329 does not exist
Dec  3 01:23:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 54816d78-661a-4bf4-a89f-704629f58347 does not exist
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:23:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Dec  3 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:23:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:23:53 compute-0 ceph-osd[206633]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Dec  3 01:23:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.419839981 +0000 UTC m=+0.088771532 container create a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.384960704 +0000 UTC m=+0.053892355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:23:54 compute-0 systemd[1]: Started libpod-conmon-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope.
Dec  3 01:23:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.570917572 +0000 UTC m=+0.239849203 container init a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.594220025 +0000 UTC m=+0.263151566 container start a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.598699545 +0000 UTC m=+0.267631186 container attach a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:23:54 compute-0 adoring_tu[232440]: 167 167
Dec  3 01:23:54 compute-0 systemd[1]: libpod-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope: Deactivated successfully.
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.604931018 +0000 UTC m=+0.273862649 container died a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f63e05110fad165aad105e48689c650f0e3d2ed12c98f27dc7d1fa5620554f-merged.mount: Deactivated successfully.
Dec  3 01:23:54 compute-0 podman[232387]: 2025-12-03 01:23:54.673670836 +0000 UTC m=+0.342602397 container remove a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:23:54 compute-0 systemd[1]: libpod-conmon-a2a7a2ab44ed001b50b81ff70804a0303a105a6bba6f257bf2b5dd4fc4a70553.scope: Deactivated successfully.
Dec  3 01:26:09 compute-0 rsyslogd[188612]: imjournal: 1750 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 01:26:09 compute-0 podman[244242]: 2025-12-03 01:26:09.163100619 +0000 UTC m=+0.113419923 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:26:09 compute-0 podman[244243]: 2025-12-03 01:26:09.16888378 +0000 UTC m=+0.115240173 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 01:26:09 compute-0 podman[244244]: 2025-12-03 01:26:09.179400864 +0000 UTC m=+0.118279718 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:26:09 compute-0 podman[244245]: 2025-12-03 01:26:09.204146106 +0000 UTC m=+0.140884910 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:26:09 compute-0 python3.9[244348]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:10 compute-0 python3.9[244433]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:26:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:11 compute-0 python3.9[244585]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:12 compute-0 podman[244737]: 2025-12-03 01:26:12.14667532 +0000 UTC m=+0.113591927 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  3 01:26:12 compute-0 python3.9[244738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:12 compute-0 python3.9[244834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:14 compute-0 python3.9[245086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a6aff1a-18dd-4e25-a770-d2a0162934d4 does not exist
Dec  3 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8fb424ba-0dd5-4144-9a48-14c3d3e2f647 does not exist
Dec  3 01:26:14 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 20a07829-5a21-4bd3-8243-bf40b0f87793 does not exist
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:26:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:14 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:26:14 compute-0 python3.9[245213]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.758367817 +0000 UTC m=+0.087783176 container create 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.722897625 +0000 UTC m=+0.052313074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:15 compute-0 systemd[1]: Started libpod-conmon-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope.
Dec  3 01:26:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.912018003 +0000 UTC m=+0.241433402 container init 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.929137772 +0000 UTC m=+0.258553151 container start 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.936067586 +0000 UTC m=+0.265483015 container attach 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:26:15 compute-0 vigorous_bartik[245443]: 167 167
Dec  3 01:26:15 compute-0 systemd[1]: libpod-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope: Deactivated successfully.
Dec  3 01:26:15 compute-0 podman[245408]: 2025-12-03 01:26:15.942513186 +0000 UTC m=+0.271928575 container died 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1638cfae26e73462d871b7c5fab0252684ffedc2e77d4acf81eba37ff3cf1c6c-merged.mount: Deactivated successfully.
Dec  3 01:26:16 compute-0 podman[245408]: 2025-12-03 01:26:16.042691597 +0000 UTC m=+0.372106986 container remove 7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:26:16 compute-0 systemd[1]: libpod-conmon-7db2e8357fd04ddfd78caab34252b0f78c467d6b4d01d85734489193e18d6846.scope: Deactivated successfully.
Dec  3 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.325355922 +0000 UTC m=+0.081025527 container create d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.293667096 +0000 UTC m=+0.049336761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:16 compute-0 systemd[1]: Started libpod-conmon-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope.
Dec  3 01:26:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.497989979 +0000 UTC m=+0.253659564 container init d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  3 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.522446523 +0000 UTC m=+0.278116128 container start d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:26:16 compute-0 podman[245523]: 2025-12-03 01:26:16.530144889 +0000 UTC m=+0.285814494 container attach d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:26:16 compute-0 python3.9[245531]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:26:16 compute-0 systemd[1]: Reloading.
Dec  3 01:26:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:26:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:26:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:17 compute-0 dreamy_merkle[245541]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:26:17 compute-0 dreamy_merkle[245541]: --> relative data size: 1.0
Dec  3 01:26:17 compute-0 dreamy_merkle[245541]: --> All data devices are unavailable
Dec  3 01:26:17 compute-0 systemd[1]: libpod-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Deactivated successfully.
Dec  3 01:26:17 compute-0 systemd[1]: libpod-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Consumed 1.243s CPU time.
Dec  3 01:26:17 compute-0 podman[245523]: 2025-12-03 01:26:17.834205945 +0000 UTC m=+1.589875520 container died d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbef025294150dc5b37e8d2a426c5e49f7bebcb4339077727f3edcd777db06df-merged.mount: Deactivated successfully.
Dec  3 01:26:17 compute-0 podman[245523]: 2025-12-03 01:26:17.925076236 +0000 UTC m=+1.680745841 container remove d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:26:17 compute-0 systemd[1]: libpod-conmon-d16ffa72fb37b040fccbee760c71af8e915248b67c18f387b97da4b140724c27.scope: Deactivated successfully.
Dec  3 01:26:18 compute-0 python3.9[245773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:18 compute-0 python3.9[245951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.146204413 +0000 UTC m=+0.092487757 container create aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.113119638 +0000 UTC m=+0.059403032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:19 compute-0 systemd[1]: Started libpod-conmon-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope.
Dec  3 01:26:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.314992003 +0000 UTC m=+0.261275387 container init aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.328780608 +0000 UTC m=+0.275063942 container start aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:26:19 compute-0 focused_ramanujan[246067]: 167 167
Dec  3 01:26:19 compute-0 podman[246015]: 2025-12-03 01:26:19.334705594 +0000 UTC m=+0.280988978 container attach aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:26:19 compute-0 systemd[1]: libpod-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope: Deactivated successfully.
Dec  3 01:26:19 compute-0 conmon[246067]: conmon aa348ac29a4fbf3ffc77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope/container/memory.events
Dec  3 01:26:19 compute-0 podman[246051]: 2025-12-03 01:26:19.382025347 +0000 UTC m=+0.159404448 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=edpm, release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 01:26:19 compute-0 podman[246105]: 2025-12-03 01:26:19.408629091 +0000 UTC m=+0.052787357 container died aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b7f30fbd89e55700eb5ee25c17253f7c90b4f8988a9b1f75cd830b1f176910b-merged.mount: Deactivated successfully.
Dec  3 01:26:19 compute-0 podman[246105]: 2025-12-03 01:26:19.527367732 +0000 UTC m=+0.171525978 container remove aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:26:19 compute-0 systemd[1]: libpod-conmon-aa348ac29a4fbf3ffc77ceb62622999f4e1b1f4013090b21fd46b8f85b1a999f.scope: Deactivated successfully.
Dec  3 01:26:19 compute-0 podman[246197]: 2025-12-03 01:26:19.826051894 +0000 UTC m=+0.077847268 container create ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:26:19 compute-0 podman[246197]: 2025-12-03 01:26:19.795013796 +0000 UTC m=+0.046809260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:19 compute-0 systemd[1]: Started libpod-conmon-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope.
Dec  3 01:26:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:19 compute-0 python3.9[246217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.010691047 +0000 UTC m=+0.262486511 container init ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.032169038 +0000 UTC m=+0.283964442 container start ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.039787391 +0000 UTC m=+0.291582855 container attach ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:26:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:20 compute-0 python3.9[246304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]: {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    "0": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "devices": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "/dev/loop3"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            ],
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_name": "ceph_lv0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_size": "21470642176",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "name": "ceph_lv0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "tags": {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_name": "ceph",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.crush_device_class": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.encrypted": "0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_id": "0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:26:20 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.vdo": "0"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            },
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "vg_name": "ceph_vg0"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        }
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    ],
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    "1": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "devices": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "/dev/loop4"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            ],
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_name": "ceph_lv1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_size": "21470642176",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "name": "ceph_lv1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "tags": {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_name": "ceph",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.crush_device_class": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.encrypted": "0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_id": "1",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.vdo": "0"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            },
Dec  3 01:26:20 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "vg_name": "ceph_vg1"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        }
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    ],
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    "2": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "devices": [
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "/dev/loop5"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            ],
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_name": "ceph_lv2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_size": "21470642176",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "name": "ceph_lv2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "tags": {
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.cluster_name": "ceph",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.crush_device_class": "",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.encrypted": "0",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osd_id": "2",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:                "ceph.vdo": "0"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            },
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "type": "block",
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:            "vg_name": "ceph_vg2"
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:        }
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]:    ]
Dec  3 01:26:20 compute-0 thirsty_taussig[246222]: }
Dec  3 01:26:20 compute-0 systemd[1]: libpod-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope: Deactivated successfully.
Dec  3 01:26:20 compute-0 podman[246197]: 2025-12-03 01:26:20.928736379 +0000 UTC m=+1.180531813 container died ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 01:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-476b290fd756a478c79272bda1d3bdf1fdad6f8ba6b5466c97aeb6cf7ed9ff6f-merged.mount: Deactivated successfully.
Dec  3 01:26:21 compute-0 podman[246197]: 2025-12-03 01:26:21.022367468 +0000 UTC m=+1.274162882 container remove ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:26:21 compute-0 systemd[1]: libpod-conmon-ce8e7f62f146b0f79078e6a870d8a862f953c9f13ca5fd266f8a3ae2204af155.scope: Deactivated successfully.
Dec  3 01:26:21 compute-0 python3.9[246570]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:26:21 compute-0 systemd[1]: Reloading.
Dec  3 01:26:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:26:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.159017142 +0000 UTC m=+0.077307313 container create f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.127206902 +0000 UTC m=+0.045497113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:22 compute-0 systemd[1]: Started libpod-conmon-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope.
Dec  3 01:26:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.497450846 +0000 UTC m=+0.415741097 container init f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.51441432 +0000 UTC m=+0.432704521 container start f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:26:22 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.52047933 +0000 UTC m=+0.438769531 container attach f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:26:22 compute-0 xenodochial_cerf[246664]: 167 167
Dec  3 01:26:22 compute-0 systemd[1]: libpod-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope: Deactivated successfully.
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.52800125 +0000 UTC m=+0.446291421 container died f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:26:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 01:26:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 01:26:22 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 01:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fc553474b9911d72b93ceb0797c3f30ecebf8b3bb67976fbb715e186bf4cd0-merged.mount: Deactivated successfully.
Dec  3 01:26:22 compute-0 podman[246663]: 2025-12-03 01:26:22.59093695 +0000 UTC m=+0.143508284 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:26:22 compute-0 podman[246611]: 2025-12-03 01:26:22.597752571 +0000 UTC m=+0.516042742 container remove f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cerf, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:26:22 compute-0 systemd[1]: libpod-conmon-f0aed331c3a01d55a88cd920241ad1b2dbb230c5146f57824ee4e4e28487fe1a.scope: Deactivated successfully.
Dec  3 01:26:22 compute-0 podman[246735]: 2025-12-03 01:26:22.832447594 +0000 UTC m=+0.078145386 container create 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:26:22 compute-0 podman[246735]: 2025-12-03 01:26:22.794066411 +0000 UTC m=+0.039764213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:26:22 compute-0 systemd[1]: Started libpod-conmon-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope.
Dec  3 01:26:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.021400518 +0000 UTC m=+0.267098350 container init 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.038487356 +0000 UTC m=+0.284185148 container start 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:26:23 compute-0 podman[246735]: 2025-12-03 01:26:23.045254535 +0000 UTC m=+0.290952307 container attach 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:26:23 compute-0 python3.9[246880]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:26:23 compute-0 network[246908]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:26:23 compute-0 network[246909]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:26:23 compute-0 network[246910]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:26:24 compute-0 quirky_wilson[246773]: {
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_id": 2,
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "type": "bluestore"
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    },
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_id": 1,
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "type": "bluestore"
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    },
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_id": 0,
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:        "type": "bluestore"
Dec  3 01:26:24 compute-0 quirky_wilson[246773]:    }
Dec  3 01:26:24 compute-0 quirky_wilson[246773]: }
Dec  3 01:26:24 compute-0 podman[246933]: 2025-12-03 01:26:24.389752092 +0000 UTC m=+0.054380411 container died 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:26:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:24 compute-0 systemd[1]: libpod-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Deactivated successfully.
Dec  3 01:26:24 compute-0 systemd[1]: libpod-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Consumed 1.257s CPU time.
Dec  3 01:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf437a68565c6aacc30599f4288aac464be10197017df9cce1966cb082c2b03-merged.mount: Deactivated successfully.
Dec  3 01:26:24 compute-0 podman[246933]: 2025-12-03 01:26:24.960251156 +0000 UTC m=+0.624879405 container remove 789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:26:24 compute-0 systemd[1]: libpod-conmon-789b0e0c4daf721be697afe9b4b2d8b659ed490c73373b7bd3755cff6b0b73fc.scope: Deactivated successfully.
Dec  3 01:26:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:26:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:26:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e9c18877-f89c-4991-927d-a5d177ee540d does not exist
Dec  3 01:26:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e4a8a479-2831-48a0-b3af-0f2bc9bb80fe does not exist
Dec  3 01:26:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:26:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:26:28
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:28 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:26:28 compute-0 systemd[1]: session-24.scope: Consumed 2min 52.148s CPU time.
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:26:28 compute-0 systemd-logind[800]: Session 24 logged out. Waiting for processes to exit.
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:26:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:26:28 compute-0 systemd-logind[800]: Removed session 24.
Dec  3 01:26:29 compute-0 podman[158098]: time="2025-12-03T01:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6840 "" "Go-http-client/1.1"
Dec  3 01:26:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:30 compute-0 python3.9[247264]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:31 compute-0 python3.9[247343]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:26:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:26:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:32 compute-0 python3.9[247495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:33 compute-0 python3.9[247647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:34 compute-0 python3.9[247725]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:35 compute-0 python3.9[247877]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 01:26:35 compute-0 systemd[1]: Starting Time & Date Service...
Dec  3 01:26:35 compute-0 systemd[1]: Started Time & Date Service.
Dec  3 01:26:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:37 compute-0 python3.9[248033]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:26:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:26:38 compute-0 python3.9[248185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:39 compute-0 python3.9[248263]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:39 compute-0 podman[248386]: 2025-12-03 01:26:39.86415654 +0000 UTC m=+0.102493997 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:26:39 compute-0 podman[248388]: 2025-12-03 01:26:39.871953208 +0000 UTC m=+0.112179472 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-type=git)
Dec  3 01:26:39 compute-0 podman[248389]: 2025-12-03 01:26:39.885866454 +0000 UTC m=+0.123589772 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 01:26:39 compute-0 podman[248390]: 2025-12-03 01:26:39.905848136 +0000 UTC m=+0.134872268 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 01:26:40 compute-0 python3.9[248491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:40 compute-0 python3.9[248576]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.maty7npy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:41 compute-0 python3.9[248728]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:42 compute-0 podman[248778]: 2025-12-03 01:26:42.384174099 +0000 UTC m=+0.144137541 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:26:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:42 compute-0 python3.9[248825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:43 compute-0 python3.9[248978]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:26:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:45 compute-0 python3[249131]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 01:26:46 compute-0 python3.9[249283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:46 compute-0 python3.9[249361]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:48 compute-0 python3.9[249513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:48 compute-0 python3.9[249591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:49 compute-0 podman[249716]: 2025-12-03 01:26:49.711634192 +0000 UTC m=+0.162528595 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, distribution-scope=public)
Dec  3 01:26:49 compute-0 python3.9[249762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:50 compute-0 python3.9[249840]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:51 compute-0 python3.9[249992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:52 compute-0 python3.9[250071]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:52 compute-0 podman[250104]: 2025-12-03 01:26:52.89206594 +0000 UTC m=+0.142898934 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:26:53 compute-0 python3.9[250245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:26:54 compute-0 python3.9[250323]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:55 compute-0 python3.9[250476]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:26:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:56 compute-0 python3.9[250633]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:26:58 compute-0 python3.9[250785]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:26:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:26:59 compute-0 python3.9[250937]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:26:59 compute-0 podman[158098]: time="2025-12-03T01:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6824 "" "Go-http-client/1.1"
Dec  3 01:27:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:00 compute-0 python3.9[251089]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:27:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:27:01 compute-0 python3.9[251241]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 01:27:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:02 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec  3 01:27:02 compute-0 systemd[1]: session-46.scope: Consumed 52.864s CPU time.
Dec  3 01:27:02 compute-0 systemd-logind[800]: Session 46 logged out. Waiting for processes to exit.
Dec  3 01:27:02 compute-0 systemd-logind[800]: Removed session 46.
Dec  3 01:27:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:06 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 01:27:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:08 compute-0 systemd-logind[800]: New session 47 of user zuul.
Dec  3 01:27:08 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec  3 01:27:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:09 compute-0 python3.9[251427]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  3 01:27:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:10 compute-0 podman[251551]: 2025-12-03 01:27:10.450246176 +0000 UTC m=+0.122616903 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:27:10 compute-0 podman[251552]: 2025-12-03 01:27:10.450169713 +0000 UTC m=+0.121676084 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:27:10 compute-0 podman[251553]: 2025-12-03 01:27:10.467217535 +0000 UTC m=+0.136749725 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  3 01:27:10 compute-0 podman[251554]: 2025-12-03 01:27:10.482276656 +0000 UTC m=+0.143400579 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:27:10 compute-0 python3.9[251652]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:27:12 compute-0 python3.9[251815]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  3 01:27:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:12 compute-0 podman[251935]: 2025-12-03 01:27:12.892164165 +0000 UTC m=+0.141698647 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  3 01:27:13 compute-0 python3.9[251987]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2pvr_1rq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:27:14 compute-0 python3.9[252112]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2pvr_1rq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725232.3443868-44-30616758922603/.source.2pvr_1rq _original_basename=._0mku4us follow=False checksum=9a092da6e6f6a5987ec5f2d86818ad0135a14436 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:27:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:15 compute-0 python3.9[252264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:27:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.111674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237111711, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1658, "num_deletes": 250, "total_data_size": 2387445, "memory_usage": 2422744, "flush_reason": "Manual Compaction"}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237127995, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1392420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7321, "largest_seqno": 8978, "table_properties": {"data_size": 1387020, "index_size": 2412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15780, "raw_average_key_size": 20, "raw_value_size": 1374191, "raw_average_value_size": 1803, "num_data_blocks": 114, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725080, "oldest_key_time": 1764725080, "file_creation_time": 1764725237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16445 microseconds, and 8896 cpu microseconds.
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.128113) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1392420 bytes OK
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.128143) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131429) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131475) EVENT_LOG_v1 {"time_micros": 1764725237131463, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.131504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2380077, prev total WAL file size 2380077, number of live WAL files 2.
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.133075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1359KB)], [20(6873KB)]
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237133201, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8431192, "oldest_snapshot_seqno": -1}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3370 keys, 6759710 bytes, temperature: kUnknown
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237213412, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6759710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6734056, "index_size": 16137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80646, "raw_average_key_size": 23, "raw_value_size": 6669968, "raw_average_value_size": 1979, "num_data_blocks": 716, "num_entries": 3370, "num_filter_entries": 3370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.214149) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6759710 bytes
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.216365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.0 rd, 84.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.7 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(10.9) write-amplify(4.9) OK, records in: 3809, records dropped: 439 output_compression: NoCompression
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.216387) EVENT_LOG_v1 {"time_micros": 1764725237216377, "job": 6, "event": "compaction_finished", "compaction_time_micros": 80280, "compaction_time_cpu_micros": 39290, "output_level": 6, "num_output_files": 1, "total_output_size": 6759710, "num_input_records": 3809, "num_output_records": 3370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237217333, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725237219458, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.132900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:27:17.219625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:27:18 compute-0 python3.9[252418]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUXzfc0dZJxCJJ4PEHADvL0LyRTIDw765KVVRPjKe66bZHCDrMnH3lZh13FtxojtEeAMtDjWC+H3ZGbvKAjyg6wN6ZmxRsL7o57jFWbBEQCHr3VQojAmFhu1UrX7NiAqOVCHai4lYrpddO28T1lK3oP3KKbw3gMA9o0GCA5TlMf5uAu10Zmp6u/NuST5GBQqc8D2ID2cZ5OL+IJ5OedhsuV0SutU2S7A/ua95d57ddgc8ltJh/JzrnYCjHsD4NNKpp1HDuLXzKlMVFpbxi5ihzlepdP4BMWtBqKzvoCCD+KxwXBNVjKLo57B/h+kfTNX/PI8IkDAGLOxYZyPozHtsLiKtTLao7Q1nU67ZcSZbDPBluTaBcUuiS12fEsU2SjMVNRPDFBKj8pn5cXmIZJaLccIvvWYr4u9xIEA1aX0IjZS9FEHD+eVLVe3HkQ+rFJ2WgMARupAMDmyso43Cje+xIL0vZYayq3PyCWhVln1wW80k/cY/5JCqhzF2lelqLBlU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICuIgcpw897dA3mGBxBK8DwsvfOOhRnRBasT73h7OlLn#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITA4C6TXl/AXsVGH1teKmoFi3piNxhosC0B5paSBiifwK5pyHq3w8pYOtVe+KhAjGKZJREVbl0k3rnMeNo31ps=#012 create=True mode=0644 path=/tmp/ansible.2pvr_1rq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:27:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:19 compute-0 podman[252571]: 2025-12-03 01:27:19.981473 +0000 UTC m=+0.088812989 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, release-0.7.12=, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec  3 01:27:20 compute-0 python3.9[252572]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2pvr_1rq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:27:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:21 compute-0 python3.9[252743]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2pvr_1rq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:27:21 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec  3 01:27:21 compute-0 systemd[1]: session-47.scope: Consumed 9.232s CPU time.
Dec  3 01:27:21 compute-0 systemd-logind[800]: Session 47 logged out. Waiting for processes to exit.
Dec  3 01:27:21 compute-0 systemd-logind[800]: Removed session 47.
Dec  3 01:27:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:23 compute-0 podman[252768]: 2025-12-03 01:27:23.887154059 +0000 UTC m=+0.137672253 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:27:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:27 compute-0 systemd-logind[800]: New session 48 of user zuul.
Dec  3 01:27:27 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0f6c8f22-165d-4480-884c-5eb5c81c100a does not exist
Dec  3 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 613899ee-81b0-4d0a-be50-ff31874b8fac does not exist
Dec  3 01:27:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 568ea4b2-e755-43db-b1e6-b664ef9ba457 does not exist
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:27:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:27:28
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.492471117 +0000 UTC m=+0.087403406 container create 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.453474744 +0000 UTC m=+0.048407073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:28 compute-0 systemd[1]: Started libpod-conmon-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope.
Dec  3 01:27:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.637866566 +0000 UTC m=+0.232798895 container init 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.655125964 +0000 UTC m=+0.250058243 container start 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.661630393 +0000 UTC m=+0.256562662 container attach 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 01:27:28 compute-0 funny_goldberg[253227]: 167 167
Dec  3 01:27:28 compute-0 systemd[1]: libpod-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope: Deactivated successfully.
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.67003614 +0000 UTC m=+0.264968469 container died 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:27:28 compute-0 python3.9[253211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e10405bbab55ad30d09317b6afc7343e2f1996a53361bfa49c2bd088ee5e3e1f-merged.mount: Deactivated successfully.
Dec  3 01:27:28 compute-0 podman[253209]: 2025-12-03 01:27:28.767230414 +0000 UTC m=+0.362162673 container remove 3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:27:28 compute-0 systemd[1]: libpod-conmon-3f88b93b9a66a2a55bf947a279d8b3a965fd9eaa76fc219d608f5a7b76c7539f.scope: Deactivated successfully.
Dec  3 01:27:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.058959401 +0000 UTC m=+0.097210626 container create 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.022836586 +0000 UTC m=+0.061087861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:29 compute-0 systemd[1]: Started libpod-conmon-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope.
Dec  3 01:27:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.233747399 +0000 UTC m=+0.271998644 container init 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.258768665 +0000 UTC m=+0.297019870 container start 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:27:29 compute-0 podman[253255]: 2025-12-03 01:27:29.265929574 +0000 UTC m=+0.304180779 container attach 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:27:29 compute-0 podman[158098]: time="2025-12-03T01:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34530 "" "Go-http-client/1.1"
Dec  3 01:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7252 "" "Go-http-client/1.1"
Dec  3 01:27:30 compute-0 python3.9[253439]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 01:27:30 compute-0 suspicious_feynman[253295]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:27:30 compute-0 suspicious_feynman[253295]: --> relative data size: 1.0
Dec  3 01:27:30 compute-0 suspicious_feynman[253295]: --> All data devices are unavailable
Dec  3 01:27:30 compute-0 podman[253255]: 2025-12-03 01:27:30.657976209 +0000 UTC m=+1.696227424 container died 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:27:30 compute-0 systemd[1]: libpod-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Deactivated successfully.
Dec  3 01:27:30 compute-0 systemd[1]: libpod-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Consumed 1.332s CPU time.
Dec  3 01:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-04fc488a14387855cab0cedbe9014de8b882620df1cbc99fa9c52b0ecb3db12c-merged.mount: Deactivated successfully.
Dec  3 01:27:30 compute-0 podman[253255]: 2025-12-03 01:27:30.738491482 +0000 UTC m=+1.776742667 container remove 4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:27:30 compute-0 systemd[1]: libpod-conmon-4e661ea338fa1b4c3383570735b1970c015ced0f4c8f86f1d133d16daab0507e.scope: Deactivated successfully.
Dec  3 01:27:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:27:31 compute-0 openstack_network_exporter[160250]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.672688168 +0000 UTC m=+0.087022544 container create 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.636869972 +0000 UTC m=+0.051204408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:31 compute-0 systemd[1]: Started libpod-conmon-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope.
Dec  3 01:27:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.815484918 +0000 UTC m=+0.229819354 container init 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.827128454 +0000 UTC m=+0.241462830 container start 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:27:31 compute-0 python3.9[253753]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.834170109 +0000 UTC m=+0.248504545 container attach 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:27:31 compute-0 angry_ritchie[253772]: 167 167
Dec  3 01:27:31 compute-0 systemd[1]: libpod-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope: Deactivated successfully.
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.841261526 +0000 UTC m=+0.255595872 container died 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  3 01:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c276cbda9a423c440865bd8c892181b5ca6069dbabb660a32c322696fc10b255-merged.mount: Deactivated successfully.
Dec  3 01:27:31 compute-0 podman[253756]: 2025-12-03 01:27:31.895857987 +0000 UTC m=+0.310192333 container remove 51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ritchie, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:27:31 compute-0 systemd[1]: libpod-conmon-51dfa9f3675284c95a5d9801c56bc255513b2b72b4ecf1cdcab9f858da385a9a.scope: Deactivated successfully.
Dec  3 01:27:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.169401477 +0000 UTC m=+0.098696641 container create 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.127509475 +0000 UTC m=+0.056804669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:32 compute-0 systemd[1]: Started libpod-conmon-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope.
Dec  3 01:27:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.349365134 +0000 UTC m=+0.278660338 container init 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.37931411 +0000 UTC m=+0.308609254 container start 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:27:32 compute-0 podman[253820]: 2025-12-03 01:27:32.38584276 +0000 UTC m=+0.315137904 container attach 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:27:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:33 compute-0 jolly_germain[253859]: {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    "0": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "devices": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "/dev/loop3"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            ],
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_name": "ceph_lv0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_size": "21470642176",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "name": "ceph_lv0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "tags": {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_name": "ceph",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.crush_device_class": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.encrypted": "0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_id": "0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.vdo": "0"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            },
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "vg_name": "ceph_vg0"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        }
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    ],
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    "1": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "devices": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "/dev/loop4"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            ],
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_name": "ceph_lv1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_size": "21470642176",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "name": "ceph_lv1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "tags": {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_name": "ceph",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.crush_device_class": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.encrypted": "0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_id": "1",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.vdo": "0"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            },
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "vg_name": "ceph_vg1"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        }
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    ],
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    "2": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "devices": [
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "/dev/loop5"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            ],
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_name": "ceph_lv2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_size": "21470642176",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "name": "ceph_lv2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "tags": {
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.cluster_name": "ceph",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.crush_device_class": "",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.encrypted": "0",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osd_id": "2",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:                "ceph.vdo": "0"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            },
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "type": "block",
Dec  3 01:27:33 compute-0 jolly_germain[253859]:            "vg_name": "ceph_vg2"
Dec  3 01:27:33 compute-0 jolly_germain[253859]:        }
Dec  3 01:27:33 compute-0 jolly_germain[253859]:    ]
Dec  3 01:27:33 compute-0 jolly_germain[253859]: }
Dec  3 01:27:33 compute-0 systemd[1]: libpod-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope: Deactivated successfully.
Dec  3 01:27:33 compute-0 podman[253820]: 2025-12-03 01:27:33.165956411 +0000 UTC m=+1.095251515 container died 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d735332863a644b0c0efac0c5e4641b513eb2d00100d7c7300fc366839bf094-merged.mount: Deactivated successfully.
Dec  3 01:27:33 compute-0 podman[253820]: 2025-12-03 01:27:33.233385104 +0000 UTC m=+1.162680198 container remove 86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:27:33 compute-0 systemd[1]: libpod-conmon-86abc304136b6d86e3c3443694ffe5ae08b05341336f00c663a7c54e9881d1cd.scope: Deactivated successfully.
Dec  3 01:27:33 compute-0 python3.9[253972]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.289170209 +0000 UTC m=+0.078891665 container create 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.249427713 +0000 UTC m=+0.039149229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:34 compute-0 systemd[1]: Started libpod-conmon-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope.
Dec  3 01:27:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.440573942 +0000 UTC m=+0.230295398 container init 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.459271134 +0000 UTC m=+0.248992580 container start 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.465955979 +0000 UTC m=+0.255677505 container attach 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:27:34 compute-0 vibrant_lichterman[254291]: 167 167
Dec  3 01:27:34 compute-0 systemd[1]: libpod-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope: Deactivated successfully.
Dec  3 01:27:34 compute-0 conmon[254291]: conmon 8726b693011e25fe5245 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope/container/memory.events
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.474093088 +0000 UTC m=+0.263814564 container died 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-928d0652422507db5ae2751f53c25f366ebaf3a1a0fdce15a205530f2cb1c568-merged.mount: Deactivated successfully.
Dec  3 01:27:34 compute-0 podman[254233]: 2025-12-03 01:27:34.543919045 +0000 UTC m=+0.333640471 container remove 8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:27:34 compute-0 systemd[1]: libpod-conmon-8726b693011e25fe5245cc96c44280ca0a04abdfcbc5e6f108de53b8f8a86412.scope: Deactivated successfully.
Dec  3 01:27:34 compute-0 python3.9[254295]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.812451331 +0000 UTC m=+0.076040667 container create acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.780865175 +0000 UTC m=+0.044454561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:27:34 compute-0 systemd[1]: Started libpod-conmon-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope.
Dec  3 01:27:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:27:34 compute-0 podman[254321]: 2025-12-03 01:27:34.998881396 +0000 UTC m=+0.262470772 container init acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:27:35 compute-0 podman[254321]: 2025-12-03 01:27:35.018872878 +0000 UTC m=+0.282462224 container start acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:27:35 compute-0 podman[254321]: 2025-12-03 01:27:35.025832381 +0000 UTC m=+0.289421767 container attach acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:27:36 compute-0 python3.9[254493]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]: {
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_id": 2,
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "type": "bluestore"
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    },
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_id": 1,
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "type": "bluestore"
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    },
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_id": 0,
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:        "type": "bluestore"
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]:    }
Dec  3 01:27:36 compute-0 goofy_jepsen[254360]: }
Dec  3 01:27:36 compute-0 systemd[1]: libpod-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Deactivated successfully.
Dec  3 01:27:36 compute-0 podman[254321]: 2025-12-03 01:27:36.223267351 +0000 UTC m=+1.486856697 container died acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:27:36 compute-0 systemd[1]: libpod-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Consumed 1.204s CPU time.
Dec  3 01:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb8c1f70a516f2567047a6743a9c7e5a6308bb5b14caf1cd5511dcd95c20075d-merged.mount: Deactivated successfully.
Dec  3 01:27:36 compute-0 podman[254321]: 2025-12-03 01:27:36.316822044 +0000 UTC m=+1.580411390 container remove acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jepsen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:27:36 compute-0 systemd[1]: libpod-conmon-acf0b7980b80d8cb16c586ab0bf8ab4e76e934a2446bd39ccf5e413bb4acd8a4.scope: Deactivated successfully.
Dec  3 01:27:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:27:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:27:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 90e13e77-fb15-4bc4-8bd6-20443c6ae463 does not exist
Dec  3 01:27:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c51daf6a-6c43-4906-a6d5-70e927b50160 does not exist
Dec  3 01:27:36 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec  3 01:27:36 compute-0 systemd[1]: session-48.scope: Consumed 6.806s CPU time.
Dec  3 01:27:36 compute-0 systemd-logind[800]: Session 48 logged out. Waiting for processes to exit.
Dec  3 01:27:36 compute-0 systemd-logind[800]: Removed session 48.
Dec  3 01:27:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:27:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:27:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2037 writes, 9033 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2037 writes, 9033 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 10.85 MB, 0.02 MB/s#012Interval WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.1      0.08              0.04         3    0.027       0      0       0.0       0.0#012  L6      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     96.0     85.4      0.15              0.08         2    0.077    7146    729       0.0       0.0#012 Sum      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     62.6     89.8      0.24              0.11         5    0.047    7146    729       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     63.3     90.7      0.23              0.11         4    0.058    7146    729       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     96.0     85.4      0.15              0.08         2    0.077    7146    729       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.9      0.08              0.04         2    0.040       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 508.19 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(36,421.47 KB,0.133633%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 01:27:40 compute-0 podman[254606]: 2025-12-03 01:27:40.883837931 +0000 UTC m=+0.123099598 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:27:40 compute-0 podman[254608]: 2025-12-03 01:27:40.892975131 +0000 UTC m=+0.127796232 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 01:27:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:40 compute-0 podman[254607]: 2025-12-03 01:27:40.907191436 +0000 UTC m=+0.145543965 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git)
Dec  3 01:27:40 compute-0 podman[254609]: 2025-12-03 01:27:40.925831706 +0000 UTC m=+0.156215251 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.970 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.974 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.990 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.991 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:27:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:27:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:42 compute-0 systemd-logind[800]: New session 49 of user zuul.
Dec  3 01:27:42 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec  3 01:27:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:43 compute-0 podman[254817]: 2025-12-03 01:27:43.734010117 +0000 UTC m=+0.105137740 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 01:27:44 compute-0 python3.9[254860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:27:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:45 compute-0 python3.9[255019]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:27:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:47 compute-0 python3.9[255103]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 01:27:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:50 compute-0 podman[255231]: 2025-12-03 01:27:50.17738656 +0000 UTC m=+0.135553671 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9)
Dec  3 01:27:50 compute-0 python3.9[255274]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:27:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:52 compute-0 python3.9[255429]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:27:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:53 compute-0 python3.9[255580]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:27:54 compute-0 podman[255704]: 2025-12-03 01:27:54.857166168 +0000 UTC m=+0.109588994 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:27:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:55 compute-0 python3.9[255754]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:27:55 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec  3 01:27:55 compute-0 systemd[1]: session-49.scope: Consumed 9.405s CPU time.
Dec  3 01:27:55 compute-0 systemd-logind[800]: Session 49 logged out. Waiting for processes to exit.
Dec  3 01:27:55 compute-0 systemd-logind[800]: Removed session 49.
Dec  3 01:27:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:27:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:27:59 compute-0 podman[158098]: time="2025-12-03T01:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
Dec  3 01:28:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:28:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:28:01 compute-0 systemd-logind[800]: New session 50 of user zuul.
Dec  3 01:28:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec  3 01:28:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:03 compute-0 python3.9[255937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:28:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:07 compute-0 python3.9[256094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:08 compute-0 python3.9[256246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:09 compute-0 python3.9[256398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:10 compute-0 python3.9[256476]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:11 compute-0 podman[256629]: 2025-12-03 01:28:11.092001291 +0000 UTC m=+0.124537731 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350)
Dec  3 01:28:11 compute-0 podman[256630]: 2025-12-03 01:28:11.099021281 +0000 UTC m=+0.126906802 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:28:11 compute-0 podman[256628]: 2025-12-03 01:28:11.109952759 +0000 UTC m=+0.147143949 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:28:11 compute-0 podman[256631]: 2025-12-03 01:28:11.127123533 +0000 UTC m=+0.144117728 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec  3 01:28:11 compute-0 python3.9[256636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:11 compute-0 python3.9[256790]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:12 compute-0 python3.9[256942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:13 compute-0 python3.9[257020]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:14 compute-0 podman[257092]: 2025-12-03 01:28:14.365172108 +0000 UTC m=+0.134852240 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:28:14 compute-0 python3.9[257195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:15 compute-0 python3.9[257347]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.090964) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297091078, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 707, "num_deletes": 251, "total_data_size": 899073, "memory_usage": 911896, "flush_reason": "Manual Compaction"}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297102795, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 891241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8979, "largest_seqno": 9685, "table_properties": {"data_size": 887556, "index_size": 1529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7950, "raw_average_key_size": 18, "raw_value_size": 880193, "raw_average_value_size": 2051, "num_data_blocks": 71, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725237, "oldest_key_time": 1764725237, "file_creation_time": 1764725297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11922 microseconds, and 6789 cpu microseconds.
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.102875) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 891241 bytes OK
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.102913) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105786) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105807) EVENT_LOG_v1 {"time_micros": 1764725297105801, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.105831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 895409, prev total WAL file size 895409, number of live WAL files 2.
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.107065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(870KB)], [23(6601KB)]
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297107159, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7650951, "oldest_snapshot_seqno": -1}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3285 keys, 6071306 bytes, temperature: kUnknown
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297159531, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6071306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6047504, "index_size": 14477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79671, "raw_average_key_size": 24, "raw_value_size": 5986185, "raw_average_value_size": 1822, "num_data_blocks": 632, "num_entries": 3285, "num_filter_entries": 3285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.160033) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6071306 bytes
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.163046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.5 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.4 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.4) write-amplify(6.8) OK, records in: 3799, records dropped: 514 output_compression: NoCompression
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.163085) EVENT_LOG_v1 {"time_micros": 1764725297163065, "job": 8, "event": "compaction_finished", "compaction_time_micros": 52574, "compaction_time_cpu_micros": 31878, "output_level": 6, "num_output_files": 1, "total_output_size": 6071306, "num_input_records": 3799, "num_output_records": 3285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297163777, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725297166605, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.106801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:28:17.166953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:28:17 compute-0 python3.9[257500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:19 compute-0 python3.9[257578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:20 compute-0 podman[257731]: 2025-12-03 01:28:20.44128942 +0000 UTC m=+0.136261972 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0)
Dec  3 01:28:20 compute-0 python3.9[257732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:21 compute-0 python3.9[257827]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:22 compute-0 python3.9[257979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:22 compute-0 python3.9[258057]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:23 compute-0 python3.9[258209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:25 compute-0 python3.9[258361]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:26 compute-0 podman[258486]: 2025-12-03 01:28:26.562852043 +0000 UTC m=+0.123107088 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:28:26 compute-0 python3.9[258540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:27 compute-0 python3.9[258663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725305.34203-165-148324838040167/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=218641854d443cf9f2580943ef1d852a26c0c89e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:28:28
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images']
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:28:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:29 compute-0 python3.9[258816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:29 compute-0 podman[158098]: time="2025-12-03T01:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
Dec  3 01:28:30 compute-0 python3.9[258939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725308.8860426-165-215100664161251/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2a64f1b8009feb5d4193c68d35401643b8ae94ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:28:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:32 compute-0 python3.9[259091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:33 compute-0 python3.9[259214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725311.51834-165-103598946085168/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ffe41f9485555366da5b2c6bd47d14387ba26ee1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:34 compute-0 python3.9[259366]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:35 compute-0 python3.9[259518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:36 compute-0 python3.9[259670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:37 compute-0 python3.9[259796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:28:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:28:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:28:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ddadf2f6-92e5-4800-991f-a6f22c4a8c8a does not exist
Dec  3 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7d1d7bb-8346-4277-b80d-4d5a3e5361ad does not exist
Dec  3 01:28:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e8b6bbfd-6506-44be-bd4b-aced31f43a8c does not exist
Dec  3 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:28:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:38 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:28:38 compute-0 python3.9[260054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:39 compute-0 python3.9[260220]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.157864956 +0000 UTC m=+0.080091720 container create 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.12462059 +0000 UTC m=+0.046847364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:39 compute-0 systemd[1]: Started libpod-conmon-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope.
Dec  3 01:28:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.330166846 +0000 UTC m=+0.252393620 container init 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.346942349 +0000 UTC m=+0.269169123 container start 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.353710561 +0000 UTC m=+0.275937325 container attach 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:28:39 compute-0 admiring_elgamal[260287]: 167 167
Dec  3 01:28:39 compute-0 systemd[1]: libpod-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope: Deactivated successfully.
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.360310879 +0000 UTC m=+0.282537653 container died 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-57899f17c6b7756174482355245870482f8767bfe225151e168db9741eac29a7-merged.mount: Deactivated successfully.
Dec  3 01:28:39 compute-0 podman[260247]: 2025-12-03 01:28:39.44746418 +0000 UTC m=+0.369690964 container remove 51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:28:39 compute-0 systemd[1]: libpod-conmon-51a11def7af6a4fc7b76a6167b8ca5a7c225010c2d7ab3a842b329e5745c8644.scope: Deactivated successfully.
Dec  3 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.748846547 +0000 UTC m=+0.080790931 container create 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.727382414 +0000 UTC m=+0.059326878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:39 compute-0 systemd[1]: Started libpod-conmon-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope.
Dec  3 01:28:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.896348415 +0000 UTC m=+0.228292839 container init 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.932113096 +0000 UTC m=+0.264057490 container start 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:28:39 compute-0 podman[260374]: 2025-12-03 01:28:39.939146837 +0000 UTC m=+0.271091351 container attach 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:28:40 compute-0 python3.9[260457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:41 compute-0 python3.9[260545]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:41 compute-0 intelligent_euler[260423]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:28:41 compute-0 intelligent_euler[260423]: --> relative data size: 1.0
Dec  3 01:28:41 compute-0 intelligent_euler[260423]: --> All data devices are unavailable
Dec  3 01:28:41 compute-0 systemd[1]: libpod-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Deactivated successfully.
Dec  3 01:28:41 compute-0 systemd[1]: libpod-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Consumed 1.302s CPU time.
Dec  3 01:28:41 compute-0 podman[260374]: 2025-12-03 01:28:41.299067129 +0000 UTC m=+1.631011553 container died 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-39cbc8e0822219b0abbe701544a3557075e3e7ed1e33137cb082edc13e9600a4-merged.mount: Deactivated successfully.
Dec  3 01:28:41 compute-0 podman[260374]: 2025-12-03 01:28:41.409307941 +0000 UTC m=+1.741252335 container remove 45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:28:41 compute-0 systemd[1]: libpod-conmon-45b77ebe2429467d3763add18ce3962327fab7705cc227bad3b73dc4d2fe01bb.scope: Deactivated successfully.
Dec  3 01:28:41 compute-0 podman[260569]: 2025-12-03 01:28:41.473645848 +0000 UTC m=+0.106952655 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec  3 01:28:41 compute-0 podman[260561]: 2025-12-03 01:28:41.492459421 +0000 UTC m=+0.135969483 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:28:41 compute-0 podman[260571]: 2025-12-03 01:28:41.515093708 +0000 UTC m=+0.155344653 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:28:41 compute-0 podman[260573]: 2025-12-03 01:28:41.517346386 +0000 UTC m=+0.148107806 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:28:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.396978933 +0000 UTC m=+0.071800532 container create 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.370356045 +0000 UTC m=+0.045177714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:42 compute-0 systemd[1]: Started libpod-conmon-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope.
Dec  3 01:28:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.552240123 +0000 UTC m=+0.227061802 container init 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.568861661 +0000 UTC m=+0.243683290 container start 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.575962474 +0000 UTC m=+0.250784143 container attach 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:28:42 compute-0 fervent_lamarr[260895]: 167 167
Dec  3 01:28:42 compute-0 systemd[1]: libpod-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope: Deactivated successfully.
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.582454758 +0000 UTC m=+0.257276377 container died 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 01:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8967eb5eb3956829039ea588900493fb07642ec7e2293d192ef9ae74e407f589-merged.mount: Deactivated successfully.
Dec  3 01:28:42 compute-0 podman[260850]: 2025-12-03 01:28:42.665957369 +0000 UTC m=+0.340778998 container remove 985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:28:42 compute-0 systemd[1]: libpod-conmon-985846a1282d7d71a252f226a9d9a160e3a4a0ff09b347c9a4ffa668592d6ebc.scope: Deactivated successfully.
Dec  3 01:28:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:42 compute-0 podman[260985]: 2025-12-03 01:28:42.947385358 +0000 UTC m=+0.078332067 container create 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:42.917826973 +0000 UTC m=+0.048773692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:43 compute-0 systemd[1]: Started libpod-conmon-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope.
Dec  3 01:28:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:43 compute-0 python3.9[260992]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.12237492 +0000 UTC m=+0.253321609 container init 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.152868733 +0000 UTC m=+0.283815392 container start 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:28:43 compute-0 podman[260985]: 2025-12-03 01:28:43.157162832 +0000 UTC m=+0.288109541 container attach 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:28:43 compute-0 festive_margulis[261002]: {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    "0": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "devices": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "/dev/loop3"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            ],
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_name": "ceph_lv0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_size": "21470642176",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "name": "ceph_lv0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "tags": {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_name": "ceph",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.crush_device_class": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.encrypted": "0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_id": "0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.vdo": "0"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            },
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "vg_name": "ceph_vg0"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        }
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    ],
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    "1": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "devices": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "/dev/loop4"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            ],
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_name": "ceph_lv1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_size": "21470642176",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "name": "ceph_lv1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "tags": {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_name": "ceph",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.crush_device_class": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.encrypted": "0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_id": "1",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.vdo": "0"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            },
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "vg_name": "ceph_vg1"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        }
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    ],
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    "2": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "devices": [
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "/dev/loop5"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            ],
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_name": "ceph_lv2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_size": "21470642176",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "name": "ceph_lv2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "tags": {
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.cluster_name": "ceph",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.crush_device_class": "",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.encrypted": "0",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osd_id": "2",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:                "ceph.vdo": "0"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            },
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "type": "block",
Dec  3 01:28:43 compute-0 festive_margulis[261002]:            "vg_name": "ceph_vg2"
Dec  3 01:28:43 compute-0 festive_margulis[261002]:        }
Dec  3 01:28:43 compute-0 festive_margulis[261002]:    ]
Dec  3 01:28:43 compute-0 festive_margulis[261002]: }
Dec  3 01:28:44 compute-0 systemd[1]: libpod-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope: Deactivated successfully.
Dec  3 01:28:44 compute-0 podman[260985]: 2025-12-03 01:28:44.023225462 +0000 UTC m=+1.154172141 container died 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c12358a9aa47911a5c6f75334095bf68ac423d7464ecda4af45e15065121828-merged.mount: Deactivated successfully.
Dec  3 01:28:44 compute-0 podman[260985]: 2025-12-03 01:28:44.114701472 +0000 UTC m=+1.245648131 container remove 5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:28:44 compute-0 systemd[1]: libpod-conmon-5ac19363d3cb70cc04d324aa6ceb993033144ae71ce336ebdb7bf7d3f658a329.scope: Deactivated successfully.
Dec  3 01:28:44 compute-0 podman[261171]: 2025-12-03 01:28:44.516208338 +0000 UTC m=+0.103784310 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:28:44 compute-0 python3.9[261293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.209634126 +0000 UTC m=+0.068440740 container create d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:28:45 compute-0 systemd[1]: Started libpod-conmon-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope.
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.188359769 +0000 UTC m=+0.047166453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.350111314 +0000 UTC m=+0.208918028 container init d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.370471484 +0000 UTC m=+0.229278128 container start d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.37634351 +0000 UTC m=+0.235150204 container attach d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:28:45 compute-0 pensive_shaw[261418]: 167 167
Dec  3 01:28:45 compute-0 systemd[1]: libpod-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope: Deactivated successfully.
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.382000909 +0000 UTC m=+0.240807523 container died d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-405b6de4d6e5ae48f1caa49be21b9970b708adf099712e39406dc301f2983022-merged.mount: Deactivated successfully.
Dec  3 01:28:45 compute-0 podman[261374]: 2025-12-03 01:28:45.447433099 +0000 UTC m=+0.306239713 container remove d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 01:28:45 compute-0 systemd[1]: libpod-conmon-d9647996dc8e83b97fdb43ab2e6764e197dfce289346e384d20037fc33353d65.scope: Deactivated successfully.
Dec  3 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.692895961 +0000 UTC m=+0.083764640 container create 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:28:45 compute-0 systemd[1]: Started libpod-conmon-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope.
Dec  3 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.664748908 +0000 UTC m=+0.055617587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:28:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.840289486 +0000 UTC m=+0.231158145 container init 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.85176943 +0000 UTC m=+0.242638069 container start 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:28:45 compute-0 podman[261492]: 2025-12-03 01:28:45.856400128 +0000 UTC m=+0.247268797 container attach 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:28:45 compute-0 python3.9[261541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:46 compute-0 python3.9[261623]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]: {
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_id": 2,
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "type": "bluestore"
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    },
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_id": 1,
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "type": "bluestore"
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    },
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_id": 0,
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:        "type": "bluestore"
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]:    }
Dec  3 01:28:46 compute-0 thirsty_zhukovsky[261536]: }
Dec  3 01:28:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:46 compute-0 podman[261492]: 2025-12-03 01:28:46.984234859 +0000 UTC m=+1.375103508 container died 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:28:46 compute-0 systemd[1]: libpod-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Deactivated successfully.
Dec  3 01:28:46 compute-0 systemd[1]: libpod-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Consumed 1.132s CPU time.
Dec  3 01:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb247ef66f415552a57e6cca0788dae2a40387e20445d740177c2b6b3b56717e-merged.mount: Deactivated successfully.
Dec  3 01:28:47 compute-0 podman[261492]: 2025-12-03 01:28:47.115746358 +0000 UTC m=+1.506615037 container remove 1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:28:47 compute-0 systemd[1]: libpod-conmon-1d2fa81ae6ce9e1c55c1a9550f38cf8eacaf8ae9ffb2cf009726c103d176025c.scope: Deactivated successfully.
Dec  3 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 05cb2194-d464-4d1a-af5d-d1fad4920941 does not exist
Dec  3 01:28:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8385837d-c00a-488e-b786-32c7d5de71ec does not exist
Dec  3 01:28:47 compute-0 python3.9[261864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:28:48 compute-0 python3.9[261942]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:49 compute-0 python3.9[262095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:50 compute-0 python3.9[262174]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:50 compute-0 podman[262199]: 2025-12-03 01:28:50.867878119 +0000 UTC m=+0.119688607 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  3 01:28:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:52 compute-0 python3.9[262346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:53 compute-0 python3.9[262500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:54 compute-0 python3.9[262578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:55 compute-0 python3.9[262730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:56 compute-0 podman[262854]: 2025-12-03 01:28:56.840607072 +0000 UTC m=+0.113925733 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:28:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:57 compute-0 python3.9[262904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:28:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:58 compute-0 python3.9[262983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:28:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:28:59 compute-0 python3.9[263135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:28:59 compute-0 podman[158098]: time="2025-12-03T01:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
Dec  3 01:29:00 compute-0 python3.9[263287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:29:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:29:01 compute-0 python3.9[263411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725339.894152-375-126828885757992/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:02 compute-0 python3.9[263563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:04 compute-0 python3.9[263716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:04 compute-0 python3.9[263794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:06 compute-0 python3.9[263946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:07 compute-0 python3.9[264098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:07 compute-0 python3.9[264176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:09 compute-0 python3.9[264328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:10 compute-0 python3.9[264480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:11 compute-0 podman[264575]: 2025-12-03 01:29:11.82102513 +0000 UTC m=+0.101890378 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:29:11 compute-0 podman[264577]: 2025-12-03 01:29:11.873710318 +0000 UTC m=+0.141589345 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 01:29:11 compute-0 podman[264576]: 2025-12-03 01:29:11.89471846 +0000 UTC m=+0.168478145 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec  3 01:29:11 compute-0 podman[264578]: 2025-12-03 01:29:11.895375629 +0000 UTC m=+0.155033080 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 01:29:12 compute-0 python3.9[264675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764725349.8369641-441-215053344701328/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=93ed2f21639fbbc78ab23db012b5cabf31590b1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:13 compute-0 python3.9[264844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:14 compute-0 python3.9[264996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:14 compute-0 podman[265041]: 2025-12-03 01:29:14.872109064 +0000 UTC m=+0.151687424 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 01:29:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:15 compute-0 python3.9[265094]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:16 compute-0 python3.9[265247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:17 compute-0 python3.9[265399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:18 compute-0 python3.9[265479]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:18 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec  3 01:29:18 compute-0 systemd[1]: session-50.scope: Consumed 59.257s CPU time.
Dec  3 01:29:18 compute-0 systemd-logind[800]: Session 50 logged out. Waiting for processes to exit.
Dec  3 01:29:18 compute-0 systemd-logind[800]: Removed session 50.
Dec  3 01:29:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:21 compute-0 podman[265505]: 2025-12-03 01:29:21.929414686 +0000 UTC m=+0.176660319 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 01:29:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:26 compute-0 systemd-logind[800]: New session 51 of user zuul.
Dec  3 01:29:26 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec  3 01:29:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:27 compute-0 podman[265654]: 2025-12-03 01:29:27.523177298 +0000 UTC m=+0.128006106 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:29:27 compute-0 python3.9[265696]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:29:28
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root']
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:29:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:29 compute-0 python3.9[265855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:29 compute-0 podman[158098]: time="2025-12-03T01:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6830 "" "Go-http-client/1.1"
Dec  3 01:29:30 compute-0 python3.9[265978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725368.054308-34-42021197726591/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=085db63d611f66658452414c8f83e35d20a7cbf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:29:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:29:31 compute-0 python3.9[266130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:32 compute-0 python3.9[266253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725370.9035518-34-228682796998500/.source.conf _original_basename=ceph.conf follow=False checksum=187519a7b5e19437fc29d35550effe70e5660ce7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:33 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec  3 01:29:33 compute-0 systemd[1]: session-51.scope: Consumed 4.872s CPU time.
Dec  3 01:29:33 compute-0 systemd-logind[800]: Session 51 logged out. Waiting for processes to exit.
Dec  3 01:29:33 compute-0 systemd-logind[800]: Removed session 51.
Dec  3 01:29:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:29:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:29:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:40 compute-0 systemd-logind[800]: New session 52 of user zuul.
Dec  3 01:29:40 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.971 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.972 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:29:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:29:41.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:29:41 compute-0 python3.9[266434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:29:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5478 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 18.42 MB, 0.03 MB/s#012Interval WAL: 5478 writes, 779 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 01:29:42 compute-0 podman[266515]: 2025-12-03 01:29:42.872305623 +0000 UTC m=+0.119349318 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:29:42 compute-0 podman[266516]: 2025-12-03 01:29:42.885221153 +0000 UTC m=+0.126326998 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Dec  3 01:29:42 compute-0 podman[266517]: 2025-12-03 01:29:42.908758077 +0000 UTC m=+0.144967812 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:29:42 compute-0 podman[266518]: 2025-12-03 01:29:42.929368757 +0000 UTC m=+0.162622367 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:29:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:43 compute-0 python3.9[266671]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:44 compute-0 python3.9[266823]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:29:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:45 compute-0 podman[266947]: 2025-12-03 01:29:45.51447691 +0000 UTC m=+0.167104395 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 01:29:45 compute-0 python3.9[266989]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:29:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:47 compute-0 python3.9[267146]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 01:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:29:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:29:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6740 writes, 28K keys, 6740 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6740 writes, 1152 syncs, 5.85 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6740 writes, 28K keys, 6740 commit groups, 1.0 writes per commit group, ingest: 19.56 MB, 0.03 MB/s
Interval WAL: 6740 writes, 1152 syncs, 5.85 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 01:29:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 77188929-5e1a-43fa-b4e5-c090c027db3b does not exist
Dec  3 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1ff8441e-4827-460f-ad89-cf9ee895dd67 does not exist
Dec  3 01:29:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a624c58-f189-4463-979a-56a7e035f58d does not exist
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:29:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:29:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:29:49 compute-0 python3.9[267546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.602897204 +0000 UTC m=+0.087465766 container create 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.572058171 +0000 UTC m=+0.056626773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:50 compute-0 systemd[1]: Started libpod-conmon-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope.
Dec  3 01:29:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.768624589 +0000 UTC m=+0.253193201 container init 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.786896672 +0000 UTC m=+0.271465234 container start 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.79417761 +0000 UTC m=+0.278746212 container attach 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:29:50 compute-0 condescending_tu[267733]: 167 167
Dec  3 01:29:50 compute-0 systemd[1]: libpod-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope: Deactivated successfully.
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.803695093 +0000 UTC m=+0.288263655 container died 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 01:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d409ac54cd9eb25e9484786f32ca02089076a3be4dd9b0aa3dc531419e485d-merged.mount: Deactivated successfully.
Dec  3 01:29:50 compute-0 podman[267696]: 2025-12-03 01:29:50.882318974 +0000 UTC m=+0.366887546 container remove 94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:29:50 compute-0 systemd[1]: libpod-conmon-94916ee59ea49f492fe109ca834c3de062a6bc940088b3d36ee0f28d2bc63f95.scope: Deactivated successfully.
Dec  3 01:29:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.21574915 +0000 UTC m=+0.099720316 container create b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.176191958 +0000 UTC m=+0.060163174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:51 compute-0 systemd[1]: Started libpod-conmon-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope.
Dec  3 01:29:51 compute-0 python3.9[267806]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:29:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.493353888 +0000 UTC m=+0.377325104 container init b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.514746261 +0000 UTC m=+0.398717437 container start b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:29:51 compute-0 podman[267812]: 2025-12-03 01:29:51.521207566 +0000 UTC m=+0.405178792 container attach b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:29:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:52 compute-0 vibrant_buck[267830]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:29:52 compute-0 vibrant_buck[267830]: --> relative data size: 1.0
Dec  3 01:29:52 compute-0 vibrant_buck[267830]: --> All data devices are unavailable
Dec  3 01:29:52 compute-0 systemd[1]: libpod-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Deactivated successfully.
Dec  3 01:29:52 compute-0 systemd[1]: libpod-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Consumed 1.255s CPU time.
Dec  3 01:29:52 compute-0 podman[267812]: 2025-12-03 01:29:52.844971901 +0000 UTC m=+1.728943077 container died b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:29:52 compute-0 podman[267878]: 2025-12-03 01:29:52.868340342 +0000 UTC m=+0.116070637 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543)
Dec  3 01:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d07cff8a68612a11cb3189ac370c4b7f436286ad350c243c3a2f209b33c1bcf9-merged.mount: Deactivated successfully.
Dec  3 01:29:52 compute-0 podman[267812]: 2025-12-03 01:29:52.947616417 +0000 UTC m=+1.831587593 container remove b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:29:52 compute-0 systemd[1]: libpod-conmon-b8f3d625d837cd1f6a08f910e370dd930a3a1930f1784fd61faacf88b4963e4b.scope: Deactivated successfully.
Dec  3 01:29:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.039997826 +0000 UTC m=+0.086012156 container create beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.002118474 +0000 UTC m=+0.048132854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:54 compute-0 systemd[1]: Started libpod-conmon-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope.
Dec  3 01:29:54 compute-0 python3.9[268177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:29:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.222058621 +0000 UTC m=+0.268072991 container init beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.239810024 +0000 UTC m=+0.285824344 container start beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.246475443 +0000 UTC m=+0.292489813 container attach beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:29:54 compute-0 optimistic_jepsen[268196]: 167 167
Dec  3 01:29:54 compute-0 systemd[1]: libpod-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope: Deactivated successfully.
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.250721023 +0000 UTC m=+0.296735383 container died beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcc482fe3e923d8924d99d47db184da12a06189a235f15a8baeb8bc0faa1ab9a-merged.mount: Deactivated successfully.
Dec  3 01:29:54 compute-0 podman[268180]: 2025-12-03 01:29:54.331984214 +0000 UTC m=+0.377998514 container remove beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jepsen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:29:54 compute-0 systemd[1]: libpod-conmon-beb5a83444ab512430d81d827974b88df77aa465224cc8c79c8591ef2364c00a.scope: Deactivated successfully.
Dec  3 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.606866767 +0000 UTC m=+0.085362408 container create 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.573958885 +0000 UTC m=+0.052454616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:54 compute-0 systemd[1]: Started libpod-conmon-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope.
Dec  3 01:29:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.746789429 +0000 UTC m=+0.225285110 container init 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.772664242 +0000 UTC m=+0.251159873 container start 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:29:54 compute-0 podman[268246]: 2025-12-03 01:29:54.780617147 +0000 UTC m=+0.259112828 container attach 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:29:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]: {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    "0": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "devices": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "/dev/loop3"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            ],
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_name": "ceph_lv0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_size": "21470642176",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "name": "ceph_lv0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "tags": {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_name": "ceph",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.crush_device_class": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.encrypted": "0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_id": "0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.vdo": "0"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            },
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "vg_name": "ceph_vg0"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        }
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    ],
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    "1": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "devices": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "/dev/loop4"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            ],
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_name": "ceph_lv1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_size": "21470642176",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "name": "ceph_lv1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "tags": {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_name": "ceph",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.crush_device_class": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.encrypted": "0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_id": "1",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.vdo": "0"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            },
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "vg_name": "ceph_vg1"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        }
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    ],
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    "2": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "devices": [
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "/dev/loop5"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            ],
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_name": "ceph_lv2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_size": "21470642176",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "name": "ceph_lv2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "tags": {
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.cluster_name": "ceph",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.crush_device_class": "",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.encrypted": "0",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osd_id": "2",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:                "ceph.vdo": "0"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            },
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "type": "block",
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:            "vg_name": "ceph_vg2"
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:        }
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]:    ]
Dec  3 01:29:55 compute-0 vibrant_hamilton[268289]: }
Dec  3 01:29:55 compute-0 python3[268394]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  3 01:29:55 compute-0 systemd[1]: libpod-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope: Deactivated successfully.
Dec  3 01:29:55 compute-0 podman[268399]: 2025-12-03 01:29:55.737641824 +0000 UTC m=+0.084134843 container died 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5184cd34f40a2da13f157a3f26fbfd8185e5d252de45bfa48b112f590008ed7a-merged.mount: Deactivated successfully.
Dec  3 01:29:55 compute-0 podman[268399]: 2025-12-03 01:29:55.838951263 +0000 UTC m=+0.185444282 container remove 6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:29:55 compute-0 systemd[1]: libpod-conmon-6248ff61efa8196ae6958d46a0afe006756bc302be0ec52f32746370393c1a0e.scope: Deactivated successfully.
Dec  3 01:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5531 writes, 24K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5531 writes, 820 syncs, 6.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5531 writes, 24K keys, 5531 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5531 writes, 820 syncs, 6.75 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 01:29:56 compute-0 python3.9[268663]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.055972572 +0000 UTC m=+0.085889113 container create 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:29:57 compute-0 systemd[1]: Started libpod-conmon-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope.
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.023218684 +0000 UTC m=+0.053135315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:29:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.183549414 +0000 UTC m=+0.213465965 container init 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.196800019 +0000 UTC m=+0.226716610 container start 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.202603214 +0000 UTC m=+0.232519805 container attach 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:29:57 compute-0 gallant_mclaren[268756]: 167 167
Dec  3 01:29:57 compute-0 systemd[1]: libpod-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope: Deactivated successfully.
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.204984781 +0000 UTC m=+0.234901352 container died 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f03423c4dba93152571e4add6636579ce1afe19dc074f12b43ae158226579795-merged.mount: Deactivated successfully.
Dec  3 01:29:57 compute-0 podman[268728]: 2025-12-03 01:29:57.255687027 +0000 UTC m=+0.285603588 container remove 593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:29:57 compute-0 systemd[1]: libpod-conmon-593dce56af5c078d4fec12ec49ef16d619f3d529cf29f4e10ee8e5e72c13760a.scope: Deactivated successfully.
Dec  3 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.487623714 +0000 UTC m=+0.091300186 container create 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.450232245 +0000 UTC m=+0.053908747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:29:57 compute-0 systemd[1]: Started libpod-conmon-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope.
Dec  3 01:29:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.658333357 +0000 UTC m=+0.262009849 container init 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.690331894 +0000 UTC m=+0.294008336 container start 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:29:57 compute-0 podman[268819]: 2025-12-03 01:29:57.694827841 +0000 UTC m=+0.298504283 container attach 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:29:57 compute-0 podman[268857]: 2025-12-03 01:29:57.722162935 +0000 UTC m=+0.117721244 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:29:58 compute-0 python3.9[268935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:29:58 compute-0 focused_blackburn[268858]: {
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_id": 2,
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "type": "bluestore"
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    },
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_id": 1,
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "type": "bluestore"
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    },
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_id": 0,
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:        "type": "bluestore"
Dec  3 01:29:58 compute-0 focused_blackburn[268858]:    }
Dec  3 01:29:58 compute-0 focused_blackburn[268858]: }
Dec  3 01:29:58 compute-0 python3.9[269027]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:29:58 compute-0 systemd[1]: libpod-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Deactivated successfully.
Dec  3 01:29:58 compute-0 systemd[1]: libpod-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Consumed 1.159s CPU time.
Dec  3 01:29:58 compute-0 podman[269042]: 2025-12-03 01:29:58.949347432 +0000 UTC m=+0.069642713 container died 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:29:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18413666fb03b3ef04bc2e27e84355fb096e1c5c1129102b9dbfd52aa4c8c0b-merged.mount: Deactivated successfully.
Dec  3 01:29:59 compute-0 podman[269042]: 2025-12-03 01:29:59.038004893 +0000 UTC m=+0.158300084 container remove 14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:29:59 compute-0 systemd[1]: libpod-conmon-14a666816107010bfd3dfb0b0281263136b2846f746312fcd872af5418411383.scope: Deactivated successfully.
Dec  3 01:29:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:29:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:29:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:29:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f64b5e18-8bed-4c71-97c8-1e4312658062 does not exist
Dec  3 01:29:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 714842c2-68c1-4062-869d-f261ed6a0f2f does not exist
Dec  3 01:29:59 compute-0 podman[158098]: time="2025-12-03T01:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6832 "" "Go-http-client/1.1"
Dec  3 01:30:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:30:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:30:00 compute-0 python3.9[269259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:01 compute-0 python3.9[269338]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.69n1sqmr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:30:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:30:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:03 compute-0 python3.9[269490]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:04 compute-0 python3.9[269568]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:05 compute-0 python3.9[269720]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:30:06 compute-0 python3[269873]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 01:30:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:07 compute-0 python3.9[270025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:08 compute-0 python3.9[270103]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:09 compute-0 python3.9[270255]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:10 compute-0 python3.9[270333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:11 compute-0 python3.9[270486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:12 compute-0 python3.9[270564]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:13 compute-0 podman[270698]: 2025-12-03 01:30:13.820252924 +0000 UTC m=+0.103522922 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:30:13 compute-0 podman[270692]: 2025-12-03 01:30:13.823856666 +0000 UTC m=+0.108179284 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:30:13 compute-0 podman[270696]: 2025-12-03 01:30:13.845147059 +0000 UTC m=+0.133533972 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 01:30:13 compute-0 podman[270701]: 2025-12-03 01:30:13.882372993 +0000 UTC m=+0.153741624 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 01:30:13 compute-0 python3.9[270792]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:14 compute-0 python3.9[270884]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:15 compute-0 podman[270936]: 2025-12-03 01:30:15.829100854 +0000 UTC m=+0.085748589 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:30:16 compute-0 python3.9[271054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:17 compute-0 python3.9[271132]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:18 compute-0 python3.9[271284]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:30:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:19 compute-0 python3.9[271440]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:21 compute-0 python3.9[271594]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
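The ansible tasks logged above follow a validate-then-apply pattern for nftables: the EDPM fragments are concatenated and dry-run checked with `nft -c -f -`, the includes are written into /etc/sysconfig/nftables.conf, and only then are the chains loaded for real. A sketch of the two commands as data; the helper names are illustrative, not part of any real module, and the file list is taken from the logged `cat` invocation.

```python
# nftables fragment files, in the order the logged task concatenates them.
EDPM_NFT_FILES = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def build_check_pipeline(files):
    """Shell pipeline that syntax-checks the combined ruleset (-c = check only,
    nothing is applied to the kernel)."""
    return "set -o pipefail; cat %s | nft -c -f -" % " ".join(files)

def build_apply_cmd(chains_file):
    """argv that actually loads the chain definitions."""
    return ["nft", "-f", chains_file]
```

Checking the concatenation before touching /etc/sysconfig/nftables.conf means a typo in one fragment fails the play instead of leaving the host with a half-applied firewall.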
Dec  3 01:30:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:22 compute-0 python3.9[271749]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:30:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:23 compute-0 podman[271874]: 2025-12-03 01:30:23.170203353 +0000 UTC m=+0.133588494 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 01:30:23 compute-0 python3.9[271922]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:25 compute-0 python3.9[272073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:30:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:27 compute-0 python3.9[272226]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:30:27 compute-0 ovs-vsctl[272227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
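The ovs-vsctl call logged above publishes the OVN chassis configuration as `external_ids` keys on the Open_vSwitch table; ovn-controller reads these to learn its bridge mappings, tunnel endpoint, and southbound DB address. An illustrative reconstruction of how that argv is assembled; the dict is a subset of the keys and values copied from the log entry.

```python
# Chassis settings copied from the logged ovs-vsctl invocation (subset).
OVN_EXTERNAL_IDS = {
    "hostname": "compute-0.ctlplane.example.com",
    "ovn-bridge": "br-int",
    "ovn-bridge-mappings": "datacentre:br-ex",
    "ovn-encap-ip": "172.19.0.100",
    "ovn-encap-type": "geneve",
    "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
    "ovn-monitor-all": "True",
}

def ovs_vsctl_set_argv(external_ids):
    """argv for `ovs-vsctl set open .` with one external_ids:key=value
    argument per setting, matching the shape of the logged command."""
    argv = ["ovs-vsctl", "set", "open", "."]
    argv += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
    return argv
```

Setting all keys in a single `ovs-vsctl set` keeps the update atomic, so ovn-controller never observes a half-written chassis configuration.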
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:30:28
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:30:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:30:28 compute-0 podman[272327]: 2025-12-03 01:30:28.856118847 +0000 UTC m=+0.109300016 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:30:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:29 compute-0 python3.9[272402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:30:29 compute-0 podman[158098]: time="2025-12-03T01:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6838 "" "Go-http-client/1.1"
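The two libpod requests logged above (from podman_exporter polling the podman socket) carry all of their options as URL query parameters. A small sketch decoding the first request path with the standard library, to make those parameters readable; the path string is copied from the access-log line.

```python
from urllib.parse import urlsplit, parse_qs

# Request path copied from the logged libpod access line.
LOGGED_REQUEST = ("/v4.9.3/libpod/containers/json"
                  "?all=true&external=false&last=0&namespace=false&size=false&sync=false")

def libpod_params(request_path):
    """Return {param: value} for a logged libpod request path."""
    query = urlsplit(request_path).query
    # parse_qs yields lists; each parameter appears once here.
    return {k: v[0] for k, v in parse_qs(query).items()}
```

`all=true` explains the "received `last` parameter - overwriting `limit`" notice: the client asks for every container, then the server reconciles that with the `last=0` paging option.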
Dec  3 01:30:30 compute-0 python3.9[272556]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:30:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:30:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:30:31 compute-0 python3.9[272710]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:32 compute-0 python3.9[272863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:33 compute-0 python3.9[272941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:34 compute-0 python3.9[273094]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:35 compute-0 python3.9[273172]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:36 compute-0 python3.9[273324]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:37 compute-0 python3.9[273476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:30:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
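The pg_autoscaler lines above relate "using X of space" to "pg target" by a constant factor of 300 before quantizing to a power of two: pg_target = usage_ratio × bias × 300. The factor is consistent with a target PG budget of 300 for the whole cluster (e.g. mon_target_pg_per_osd=100 on 3 OSDs, though that breakdown is an assumption here). A sketch verifying the arithmetic against the logged values:

```python
import math

def pg_target(usage_ratio, bias, cluster_pg_budget=300):
    """Raw (pre-quantization) PG target as reported by pg_autoscaler.

    usage_ratio: pool's share of effective cluster capacity ("using X of space")
    bias: per-pool bias (4.0 for metadata pools like cephfs meta / rgw meta)
    cluster_pg_budget: assumed total PG target for the cluster (300 here)
    """
    return usage_ratio * bias * cluster_pg_budget

# Values copied from the '.mgr' log entry: ratio 7.185749983720779e-06,
# bias 1.0, reported pg target 0.0021557249951162337.
mgr_target = pg_target(7.185749983720779e-06, 1.0)
```

The raw targets are all far below the current pg_num values, which is why the autoscaler leaves every pool at its quantized floor (1 for `.mgr`, 32 elsewhere).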
Dec  3 01:30:38 compute-0 python3.9[273554]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:39 compute-0 python3.9[273706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:40 compute-0 python3.9[273784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:41 compute-0 python3.9[273936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:30:41 compute-0 systemd[1]: Reloading.
Dec  3 01:30:41 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:30:41 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:30:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:44 compute-0 podman[274125]: 2025-12-03 01:30:44.060702154 +0000 UTC m=+0.106336122 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:30:44 compute-0 podman[274126]: 2025-12-03 01:30:44.072453327 +0000 UTC m=+0.117299992 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  3 01:30:44 compute-0 podman[274127]: 2025-12-03 01:30:44.104401152 +0000 UTC m=+0.141771045 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:30:44 compute-0 podman[274128]: 2025-12-03 01:30:44.13543718 +0000 UTC m=+0.167207155 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:30:44 compute-0 python3.9[274139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:44 compute-0 python3.9[274288]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:45 compute-0 python3.9[274441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:46 compute-0 podman[274491]: 2025-12-03 01:30:46.531864195 +0000 UTC m=+0.136238029 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:30:46 compute-0 python3.9[274540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:47 compute-0 python3.9[274692]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:30:48 compute-0 systemd[1]: Reloading.
Dec  3 01:30:48 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:30:48 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:30:48 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 01:30:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 01:30:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 01:30:48 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 01:30:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:49 compute-0 python3.9[274888]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:50 compute-0 python3.9[275040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:51 compute-0 python3.9[275118]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:53 compute-0 python3.9[275270]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:30:53 compute-0 podman[275353]: 2025-12-03 01:30:53.885971002 +0000 UTC m=+0.136203087 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:30:54 compute-0 python3.9[275441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:30:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:55 compute-0 python3.9[275519]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.pqao_6ru recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:56 compute-0 python3.9[275672]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:30:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:30:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:30:59 compute-0 podman[275875]: 2025-12-03 01:30:59.368373453 +0000 UTC m=+0.127273102 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:30:59 compute-0 podman[158098]: time="2025-12-03T01:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 69677d6e-f16b-4c07-8ca0-9ecabf412c00 does not exist
Dec  3 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 42b6a4a7-9570-476e-91d3-143310c2e712 does not exist
Dec  3 01:31:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a30475a-429a-445c-bd7e-d65a6f7af71a does not exist
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:31:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:31:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:31:00 compute-0 python3.9[276210]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  3 01:31:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:31:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.662856017 +0000 UTC m=+0.090648867 container create 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.625682146 +0000 UTC m=+0.053475056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:01 compute-0 systemd[1]: Started libpod-conmon-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope.
Dec  3 01:31:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.805483227 +0000 UTC m=+0.233276127 container init 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.822591743 +0000 UTC m=+0.250384603 container start 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.829162418 +0000 UTC m=+0.256955308 container attach 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:31:01 compute-0 reverent_yalow[276465]: 167 167
Dec  3 01:31:01 compute-0 systemd[1]: libpod-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope: Deactivated successfully.
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.834799058 +0000 UTC m=+0.262591908 container died 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9c83708454e117e8bedad08339b6c808fff6a87a89ea0765d347c7733e69cb4-merged.mount: Deactivated successfully.
Dec  3 01:31:01 compute-0 podman[276424]: 2025-12-03 01:31:01.918319433 +0000 UTC m=+0.346112253 container remove 79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:31:01 compute-0 systemd[1]: libpod-conmon-79c855ba43ef4a8eb72cfed48b1f479ccb198b8a624544e553114125965bf2f8.scope: Deactivated successfully.
Dec  3 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.161843772 +0000 UTC m=+0.093149013 container create ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:31:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:02 compute-0 python3.9[276533]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.123811358 +0000 UTC m=+0.055116689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:02 compute-0 systemd[1]: Started libpod-conmon-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope.
Dec  3 01:31:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.333613538 +0000 UTC m=+0.264918889 container init ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.361684766 +0000 UTC m=+0.292990047 container start ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:31:02 compute-0 podman[276539]: 2025-12-03 01:31:02.36858735 +0000 UTC m=+0.299892631 container attach ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:31:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:03 compute-0 python3.9[276724]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 01:31:03 compute-0 kind_turing[276568]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:31:03 compute-0 kind_turing[276568]: --> relative data size: 1.0
Dec  3 01:31:03 compute-0 kind_turing[276568]: --> All data devices are unavailable
Dec  3 01:31:03 compute-0 systemd[1]: libpod-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Deactivated successfully.
Dec  3 01:31:03 compute-0 podman[276539]: 2025-12-03 01:31:03.655823767 +0000 UTC m=+1.587129038 container died ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:31:03 compute-0 systemd[1]: libpod-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Consumed 1.225s CPU time.
Dec  3 01:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b122c34f8ff908f5d5311ffb8984d4b73023cbcb34b258bf3a89d2c4c2c8b37-merged.mount: Deactivated successfully.
Dec  3 01:31:03 compute-0 podman[276539]: 2025-12-03 01:31:03.758835602 +0000 UTC m=+1.690140843 container remove ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_turing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:31:03 compute-0 systemd[1]: libpod-conmon-ae6f2ed10921d050d0dc15ce21ce9404c903cd392bf726ede37f2d4cd2f841c6.scope: Deactivated successfully.
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.777581485 +0000 UTC m=+0.068169528 container create 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:31:04 compute-0 systemd[1]: Started libpod-conmon-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope.
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.749479316 +0000 UTC m=+0.040067359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.911593575 +0000 UTC m=+0.202181668 container init 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.928574338 +0000 UTC m=+0.219162371 container start 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.935092651 +0000 UTC m=+0.225680724 container attach 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:31:04 compute-0 musing_feistel[277081]: 167 167
Dec  3 01:31:04 compute-0 systemd[1]: libpod-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope: Deactivated successfully.
Dec  3 01:31:04 compute-0 podman[277046]: 2025-12-03 01:31:04.940036053 +0000 UTC m=+0.230624096 container died 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b4978e871c91f75b6f76ae56950e000897812dbeb2fe900d04d5f0a1df1d59-merged.mount: Deactivated successfully.
Dec  3 01:31:05 compute-0 podman[277046]: 2025-12-03 01:31:05.020789135 +0000 UTC m=+0.311377178 container remove 5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:31:05 compute-0 systemd[1]: libpod-conmon-5b80459c7b89c4c88ea8ecd2e82ec05bdaec669d96c7b9f1cd2aea8a94771677.scope: Deactivated successfully.
Dec  3 01:31:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.278717607 +0000 UTC m=+0.074663000 container create e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.2472938 +0000 UTC m=+0.043239193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:05 compute-0 systemd[1]: Started libpod-conmon-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope.
Dec  3 01:31:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.462094153 +0000 UTC m=+0.258039526 container init e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.488681501 +0000 UTC m=+0.284626864 container start e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 01:31:05 compute-0 podman[277141]: 2025-12-03 01:31:05.493354096 +0000 UTC m=+0.289299459 container attach e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:31:06 compute-0 elastic_einstein[277164]: {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    "0": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "devices": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "/dev/loop3"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            ],
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_name": "ceph_lv0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_size": "21470642176",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "name": "ceph_lv0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "tags": {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_name": "ceph",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.crush_device_class": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.encrypted": "0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_id": "0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.vdo": "0"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            },
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "vg_name": "ceph_vg0"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        }
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    ],
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    "1": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "devices": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "/dev/loop4"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            ],
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_name": "ceph_lv1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_size": "21470642176",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "name": "ceph_lv1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "tags": {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_name": "ceph",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.crush_device_class": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.encrypted": "0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_id": "1",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.vdo": "0"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            },
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "vg_name": "ceph_vg1"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        }
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    ],
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    "2": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "devices": [
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "/dev/loop5"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            ],
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_name": "ceph_lv2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_size": "21470642176",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "name": "ceph_lv2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "tags": {
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.cluster_name": "ceph",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.crush_device_class": "",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.encrypted": "0",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osd_id": "2",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:                "ceph.vdo": "0"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            },
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "type": "block",
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:            "vg_name": "ceph_vg2"
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:        }
Dec  3 01:31:06 compute-0 elastic_einstein[277164]:    ]
Dec  3 01:31:06 compute-0 elastic_einstein[277164]: }
Dec  3 01:31:06 compute-0 systemd[1]: libpod-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope: Deactivated successfully.
Dec  3 01:31:06 compute-0 podman[277141]: 2025-12-03 01:31:06.344663688 +0000 UTC m=+1.140609071 container died e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a14c5cd4d3f990529d0796d72271d46228f6a2290449954651257174011383f-merged.mount: Deactivated successfully.
Dec  3 01:31:06 compute-0 podman[277141]: 2025-12-03 01:31:06.467042229 +0000 UTC m=+1.262987602 container remove e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:31:06 compute-0 systemd[1]: libpod-conmon-e12150af361d2212dad414058aff9abba5de5587d0ad0cc0c8052d8ccb98570b.scope: Deactivated successfully.
Dec  3 01:31:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:07 compute-0 python3[277434]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.616374382 +0000 UTC m=+0.068084705 container create c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.590827681 +0000 UTC m=+0.042538004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:07 compute-0 systemd[1]: Started libpod-conmon-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope.
Dec  3 01:31:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:07 compute-0 python3[277434]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012          "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-12-01T06:38:47.246477714Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251125",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012    
      "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345722821,#012          "VirtualSize": 345722821,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012                    "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012                    "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012                    "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251125",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": 
"fa2bb8efef6782c26ea7f1675eeb36dd",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-25T04:02:36.223494528Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:36.223562059Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:39.054452717Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025707917Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025744608Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025767729Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012             
  {#012                    "created": "2025-12-01T06:09:28.025791379Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.02581523Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025867611Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.469442331Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:10:02.029095017Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    
"created": "2025-12-01T06:10:05.672474685Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.746732445 +0000 UTC m=+0.198442768 container init c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.764724914 +0000 UTC m=+0.216435227 container start c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.772025229 +0000 UTC m=+0.223735562 container attach c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:31:07 compute-0 magical_sammet[277526]: 167 167
Dec  3 01:31:07 compute-0 systemd[1]: libpod-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope: Deactivated successfully.
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.777084784 +0000 UTC m=+0.228795077 container died c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc70281d91a89ddfd042c66ca4fa4783881778f0e5d97ee4913ff7ca6043d7e-merged.mount: Deactivated successfully.
Dec  3 01:31:07 compute-0 podman[277492]: 2025-12-03 01:31:07.843267817 +0000 UTC m=+0.294978140 container remove c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:31:07 compute-0 systemd[1]: libpod-conmon-c451890e566faf74ae52c7747ef67674468c5bf953f1ca804e151b15c4ab38ae.scope: Deactivated successfully.
Dec  3 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.086980711 +0000 UTC m=+0.072736089 container create ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.052246875 +0000 UTC m=+0.038002293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:31:08 compute-0 systemd[1]: Started libpod-conmon-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope.
Dec  3 01:31:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.239216397 +0000 UTC m=+0.224971815 container init ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.259756083 +0000 UTC m=+0.245511451 container start ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:31:08 compute-0 podman[277588]: 2025-12-03 01:31:08.268203888 +0000 UTC m=+0.253959306 container attach ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:31:09 compute-0 python3.9[277738]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:31:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]: {
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_id": 2,
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "type": "bluestore"
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    },
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_id": 1,
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "type": "bluestore"
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    },
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_id": 0,
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:        "type": "bluestore"
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]:    }
Dec  3 01:31:09 compute-0 suspicious_swartz[277624]: }
Dec  3 01:31:09 compute-0 systemd[1]: libpod-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Deactivated successfully.
Dec  3 01:31:09 compute-0 systemd[1]: libpod-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Consumed 1.140s CPU time.
Dec  3 01:31:09 compute-0 podman[277588]: 2025-12-03 01:31:09.412514977 +0000 UTC m=+1.398270345 container died ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bccc071674e55c48806889f98a465899c03576fa441d65d0e3c7fa7eb0cc49b7-merged.mount: Deactivated successfully.
Dec  3 01:31:09 compute-0 podman[277588]: 2025-12-03 01:31:09.525615301 +0000 UTC m=+1.511370639 container remove ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:31:09 compute-0 systemd[1]: libpod-conmon-ad08a1a02f4950134d53e94f2cf7b4355986c1c3624c7b352ffb340c39de5f27.scope: Deactivated successfully.
Dec  3 01:31:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:31:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:31:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f3aeb102-e175-4a75-9408-22b1a0a3b892 does not exist
Dec  3 01:31:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 580e1067-00a7-4518-909d-3b3508a6e780 does not exist
Dec  3 01:31:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:31:10 compute-0 python3.9[277983]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:31:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:11 compute-0 python3.9[278059]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:31:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:13 compute-0 python3.9[278210]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725472.053399-536-212012165253571/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:31:14 compute-0 python3.9[278286]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:31:14 compute-0 podman[278288]: 2025-12-03 01:31:14.574818482 +0000 UTC m=+0.114439291 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:31:14 compute-0 podman[278290]: 2025-12-03 01:31:14.575163851 +0000 UTC m=+0.109209381 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  3 01:31:14 compute-0 podman[278289]: 2025-12-03 01:31:14.575992053 +0000 UTC m=+0.114679127 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:31:14 compute-0 podman[278291]: 2025-12-03 01:31:14.6344366 +0000 UTC m=+0.174174542 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:31:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:15 compute-0 python3.9[278521]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:31:15 compute-0 ovs-vsctl[278522]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  3 01:31:16 compute-0 podman[278674]: 2025-12-03 01:31:16.70922439 +0000 UTC m=+0.113456043 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 01:31:16 compute-0 python3.9[278675]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:31:16 compute-0 ovs-vsctl[278694]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  3 01:31:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:18 compute-0 python3.9[278847]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:31:18 compute-0 ovs-vsctl[278848]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  3 01:31:18 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec  3 01:31:18 compute-0 systemd[1]: session-52.scope: Consumed 1min 14.859s CPU time.
Dec  3 01:31:18 compute-0 systemd-logind[800]: Session 52 logged out. Waiting for processes to exit.
Dec  3 01:31:18 compute-0 systemd-logind[800]: Removed session 52.
Dec  3 01:31:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec  3 01:31:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec  3 01:31:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Dec  3 01:31:24 compute-0 systemd-logind[800]: New session 53 of user zuul.
Dec  3 01:31:24 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec  3 01:31:24 compute-0 podman[278880]: 2025-12-03 01:31:24.707297318 +0000 UTC m=+0.134670369 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec  3 01:31:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec  3 01:31:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:31:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:27 compute-0 python3.9[279054]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:31:28
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:31:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:31:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:31:29 compute-0 python3.9[279212]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:29 compute-0 podman[158098]: time="2025-12-03T01:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6845 "" "Go-http-client/1.1"
Dec  3 01:31:29 compute-0 podman[279260]: 2025-12-03 01:31:29.880201173 +0000 UTC m=+0.123144342 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:31:30 compute-0 python3.9[279388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:31:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:31:31 compute-0 python3.9[279541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:32 compute-0 python3.9[279694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec  3 01:31:33 compute-0 python3.9[279846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Dec  3 01:31:35 compute-0 python3.9[279996]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:31:36 compute-0 python3.9[280148]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 01:31:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:31:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:31:38 compute-0 python3.9[280298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:40 compute-0 python3.9[280419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725497.3927703-86-267942322831429/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.971 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.972 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:31:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:31:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:42 compute-0 python3.9[280570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:43 compute-0 python3.9[280692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725500.620203-101-179203207798632/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:44 compute-0 python3.9[280844]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:31:44 compute-0 podman[280854]: 2025-12-03 01:31:44.836605008 +0000 UTC m=+0.110482854 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec  3 01:31:44 compute-0 podman[280855]: 2025-12-03 01:31:44.848210268 +0000 UTC m=+0.109297903 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 01:31:44 compute-0 podman[280853]: 2025-12-03 01:31:44.861568514 +0000 UTC m=+0.137161316 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:31:44 compute-0 podman[280856]: 2025-12-03 01:31:44.876231714 +0000 UTC m=+0.130877788 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:31:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:45 compute-0 python3.9[281018]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:31:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:47 compute-0 podman[281096]: 2025-12-03 01:31:47.874837249 +0000 UTC m=+0.130735085 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 01:31:48 compute-0 python3.9[281190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:31:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:49 compute-0 python3.9[281344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:50 compute-0 python3.9[281465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725509.0543349-138-2978533307805/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:52 compute-0 python3.9[281615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:54 compute-0 python3.9[281736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725511.8342652-138-149461265406231/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:55 compute-0 podman[281861]: 2025-12-03 01:31:55.716925919 +0000 UTC m=+0.123244275 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release=1214.1726694543, io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Dec  3 01:31:55 compute-0 python3.9[281905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:56 compute-0 python3.9[282029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725515.104071-182-164662718790397/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:31:57 compute-0 python3.9[282179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:31:58 compute-0 python3.9[282300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725517.1467767-182-267736154891827/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:31:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:31:59 compute-0 python3.9[282450]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:31:59 compute-0 podman[158098]: time="2025-12-03T01:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6842 "" "Go-http-client/1.1"
Dec  3 01:32:00 compute-0 podman[282576]: 2025-12-03 01:32:00.676354057 +0000 UTC m=+0.125289849 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:32:00 compute-0 python3.9[282628]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:32:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:32:01 compute-0 python3.9[282780]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:02 compute-0 python3.9[282858]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:04 compute-0 python3.9[283010]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:05 compute-0 python3.9[283088]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:06 compute-0 python3.9[283241]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:07 compute-0 python3.9[283393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:08 compute-0 python3.9[283472]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:09 compute-0 python3.9[283624]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:10 compute-0 python3.9[283775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8756678d-37a0-41be-a19a-05a9b31dd1c4 does not exist
Dec  3 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ecf32252-2cf3-4dc1-893a-0fe560a4b890 does not exist
Dec  3 01:32:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9764b25-acf4-460d-a091-e3a99cb72c41 does not exist
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:32:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:32:11 compute-0 python3.9[284007]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:32:11 compute-0 systemd[1]: Reloading.
Dec  3 01:32:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:32:11 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.253472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532253522, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3473905, "memory_usage": 3536360, "flush_reason": "Manual Compaction"}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532279449, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3409130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9686, "largest_seqno": 11725, "table_properties": {"data_size": 3399866, "index_size": 5886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17858, "raw_average_key_size": 19, "raw_value_size": 3381497, "raw_average_value_size": 3687, "num_data_blocks": 267, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725298, "oldest_key_time": 1764725298, "file_creation_time": 1764725532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 26011 microseconds, and 14058 cpu microseconds.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.279490) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3409130 bytes OK
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.279507) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282017) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282033) EVENT_LOG_v1 {"time_micros": 1764725532282028, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282050) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3465398, prev total WAL file size 3465398, number of live WAL files 2.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.283115) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3329KB)], [26(5929KB)]
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532283192, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9480436, "oldest_snapshot_seqno": -1}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3688 keys, 7829534 bytes, temperature: kUnknown
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532331457, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7829534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7801305, "index_size": 17879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88604, "raw_average_key_size": 24, "raw_value_size": 7731152, "raw_average_value_size": 2096, "num_data_blocks": 775, "num_entries": 3688, "num_filter_entries": 3688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.331749) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7829534 bytes
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.334052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4202, records dropped: 514 output_compression: NoCompression
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.334075) EVENT_LOG_v1 {"time_micros": 1764725532334065, "job": 10, "event": "compaction_finished", "compaction_time_micros": 48391, "compaction_time_cpu_micros": 24454, "output_level": 6, "num_output_files": 1, "total_output_size": 7829534, "num_input_records": 4202, "num_output_records": 3688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532334945, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725532336265, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.282993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:32:12.336496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.566216702 +0000 UTC m=+0.069116687 container create bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:32:12 compute-0 systemd[1]: Started libpod-conmon-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope.
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.542592428 +0000 UTC m=+0.045492453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.677924381 +0000 UTC m=+0.180824386 container init bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.68889388 +0000 UTC m=+0.191793875 container start bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.69293512 +0000 UTC m=+0.195835105 container attach bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:32:12 compute-0 awesome_almeida[284275]: 167 167
Dec  3 01:32:12 compute-0 systemd[1]: libpod-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope: Deactivated successfully.
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.710028036 +0000 UTC m=+0.212928031 container died bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7525c70556dfba5e5a7515b14eb1058dabc07ad9612023d50358e06ef62a25d8-merged.mount: Deactivated successfully.
Dec  3 01:32:12 compute-0 podman[284237]: 2025-12-03 01:32:12.781031474 +0000 UTC m=+0.283931459 container remove bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:32:12 compute-0 systemd[1]: libpod-conmon-bd20eb856cc9a0c1d9a7558c284591c23dd31326acc53870de04710f88ac314a.scope: Deactivated successfully.
Dec  3 01:32:13 compute-0 python3.9[284344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.026764081 +0000 UTC m=+0.088314211 container create 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:12.99409668 +0000 UTC m=+0.055646860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:13 compute-0 systemd[1]: Started libpod-conmon-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope.
Dec  3 01:32:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.213255531 +0000 UTC m=+0.274805701 container init 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.242243643 +0000 UTC m=+0.303793813 container start 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:32:13 compute-0 podman[284350]: 2025-12-03 01:32:13.24946985 +0000 UTC m=+0.311020020 container attach 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:32:13 compute-0 python3.9[284447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:14 compute-0 goofy_kilby[284373]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:32:14 compute-0 goofy_kilby[284373]: --> relative data size: 1.0
Dec  3 01:32:14 compute-0 goofy_kilby[284373]: --> All data devices are unavailable
Dec  3 01:32:14 compute-0 systemd[1]: libpod-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Deactivated successfully.
Dec  3 01:32:14 compute-0 podman[284350]: 2025-12-03 01:32:14.566948208 +0000 UTC m=+1.628498398 container died 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:32:14 compute-0 systemd[1]: libpod-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Consumed 1.267s CPU time.
Dec  3 01:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e63698bf649c38f9acf7da98c85cf6a09f407cea3d73c43215a939264a86d7f-merged.mount: Deactivated successfully.
Dec  3 01:32:14 compute-0 podman[284350]: 2025-12-03 01:32:14.676008945 +0000 UTC m=+1.737559085 container remove 7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:32:14 compute-0 systemd[1]: libpod-conmon-7e8586cd14d4f17e2ce8cc2f77ef911bc1053fd143e19c60a9c6e8b0702bcb85.scope: Deactivated successfully.
Dec  3 01:32:14 compute-0 python3.9[284623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:15 compute-0 podman[284688]: 2025-12-03 01:32:15.135870436 +0000 UTC m=+0.112867011 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:32:15 compute-0 podman[284689]: 2025-12-03 01:32:15.146561378 +0000 UTC m=+0.118511056 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64)
Dec  3 01:32:15 compute-0 podman[284690]: 2025-12-03 01:32:15.168685832 +0000 UTC m=+0.138641485 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 01:32:15 compute-0 podman[284691]: 2025-12-03 01:32:15.192032099 +0000 UTC m=+0.144108814 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.773094258 +0000 UTC m=+0.098577441 container create 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.745084374 +0000 UTC m=+0.070567547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:15 compute-0 systemd[1]: Started libpod-conmon-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope.
Dec  3 01:32:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.928065988 +0000 UTC m=+0.253549211 container init 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.943681314 +0000 UTC m=+0.269164487 container start 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.949796141 +0000 UTC m=+0.275279304 container attach 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:32:15 compute-0 affectionate_meninsky[284899]: 167 167
Dec  3 01:32:15 compute-0 systemd[1]: libpod-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope: Deactivated successfully.
Dec  3 01:32:15 compute-0 podman[284859]: 2025-12-03 01:32:15.956320589 +0000 UTC m=+0.281803762 container died 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c87877285bc1092a340dacfb54354b5446b76748e4df13d0fc70cf407f6fd1-merged.mount: Deactivated successfully.
Dec  3 01:32:16 compute-0 podman[284859]: 2025-12-03 01:32:16.034250666 +0000 UTC m=+0.359733839 container remove 68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:32:16 compute-0 systemd[1]: libpod-conmon-68ab8fd824d1a2a113a3fbea7b94894614c59fd302c4eda872fc16acd9cc2427.scope: Deactivated successfully.
Dec  3 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.298135768 +0000 UTC m=+0.106508487 container create 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.236344312 +0000 UTC m=+0.044717101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:16 compute-0 python3.9[284969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:16 compute-0 systemd[1]: Started libpod-conmon-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope.
Dec  3 01:32:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.50194944 +0000 UTC m=+0.310322209 container init 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.519228112 +0000 UTC m=+0.327600831 container start 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:32:16 compute-0 podman[284975]: 2025-12-03 01:32:16.525991317 +0000 UTC m=+0.334364086 container attach 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:32:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]: {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    "0": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "devices": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "/dev/loop3"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            ],
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_name": "ceph_lv0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_size": "21470642176",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "name": "ceph_lv0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "tags": {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_name": "ceph",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.crush_device_class": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.encrypted": "0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_id": "0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.vdo": "0"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            },
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "vg_name": "ceph_vg0"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        }
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    ],
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    "1": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "devices": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "/dev/loop4"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            ],
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_name": "ceph_lv1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_size": "21470642176",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "name": "ceph_lv1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "tags": {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_name": "ceph",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.crush_device_class": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.encrypted": "0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_id": "1",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.vdo": "0"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            },
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "vg_name": "ceph_vg1"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        }
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    ],
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    "2": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "devices": [
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "/dev/loop5"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            ],
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_name": "ceph_lv2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_size": "21470642176",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "name": "ceph_lv2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "tags": {
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.cluster_name": "ceph",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.crush_device_class": "",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.encrypted": "0",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osd_id": "2",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:                "ceph.vdo": "0"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            },
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "type": "block",
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:            "vg_name": "ceph_vg2"
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:        }
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]:    ]
Dec  3 01:32:17 compute-0 relaxed_kalam[284991]: }
Dec  3 01:32:17 compute-0 systemd[1]: libpod-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope: Deactivated successfully.
Dec  3 01:32:17 compute-0 podman[284975]: 2025-12-03 01:32:17.401576404 +0000 UTC m=+1.209949093 container died 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08ac1a890c0e2556ce3e10c40d75ff20c47cbb4904453204f7dfeeeca6e1ec0-merged.mount: Deactivated successfully.
Dec  3 01:32:17 compute-0 podman[284975]: 2025-12-03 01:32:17.486644236 +0000 UTC m=+1.295016925 container remove 15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:32:17 compute-0 systemd[1]: libpod-conmon-15b65a087d1cb883aef24c4eaa9f9f9d3c88eb68354ddd8086009f21e197a680.scope: Deactivated successfully.
Dec  3 01:32:17 compute-0 python3.9[285152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:32:17 compute-0 systemd[1]: Reloading.
Dec  3 01:32:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:32:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:32:18 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 01:32:18 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 01:32:18 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 01:32:18 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 01:32:18 compute-0 podman[285250]: 2025-12-03 01:32:18.440892301 +0000 UTC m=+0.149694987 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:32:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.104435621 +0000 UTC m=+0.075538672 container create 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.076847298 +0000 UTC m=+0.047950379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:19 compute-0 systemd[1]: Started libpod-conmon-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope.
Dec  3 01:32:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.251143606 +0000 UTC m=+0.222246707 container init 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.268784467 +0000 UTC m=+0.239887528 container start 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.276390505 +0000 UTC m=+0.247493606 container attach 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:32:19 compute-0 stoic_buck[285475]: 167 167
Dec  3 01:32:19 compute-0 systemd[1]: libpod-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope: Deactivated successfully.
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.281008861 +0000 UTC m=+0.252111912 container died 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-22612e3c41728e8a109c6f94ae26994c823015f3825580597de5f19be988745f-merged.mount: Deactivated successfully.
Dec  3 01:32:19 compute-0 podman[285438]: 2025-12-03 01:32:19.369112775 +0000 UTC m=+0.340215806 container remove 06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:32:19 compute-0 systemd[1]: libpod-conmon-06923f64dcdbd65773154e59b143628b46bf5f05d31a8b8d2d3c065c4087a9ce.scope: Deactivated successfully.
Dec  3 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.626710846 +0000 UTC m=+0.064352207 container create 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:32:19 compute-0 systemd[1]: Started libpod-conmon-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope.
Dec  3 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.604624773 +0000 UTC m=+0.042266184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:32:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:19 compute-0 python3.9[285555]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.794159286 +0000 UTC m=+0.231800667 container init 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.814773849 +0000 UTC m=+0.252415240 container start 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:32:19 compute-0 podman[285553]: 2025-12-03 01:32:19.821347238 +0000 UTC m=+0.258988649 container attach 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:32:20 compute-0 python3.9[285738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:20 compute-0 determined_satoshi[285571]: {
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_id": 2,
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "type": "bluestore"
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    },
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_id": 1,
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "type": "bluestore"
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    },
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_id": 0,
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:        "type": "bluestore"
Dec  3 01:32:20 compute-0 determined_satoshi[285571]:    }
Dec  3 01:32:20 compute-0 determined_satoshi[285571]: }
Dec  3 01:32:20 compute-0 systemd[1]: libpod-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Deactivated successfully.
Dec  3 01:32:20 compute-0 podman[285553]: 2025-12-03 01:32:20.987865696 +0000 UTC m=+1.425507067 container died 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:32:20 compute-0 systemd[1]: libpod-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Consumed 1.167s CPU time.
Dec  3 01:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e6129541d7b1faf48b61e523a220e5eda5630b567c102d3e610f41adefe1d6f-merged.mount: Deactivated successfully.
Dec  3 01:32:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:21 compute-0 podman[285553]: 2025-12-03 01:32:21.09828816 +0000 UTC m=+1.535929531 container remove 6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:32:21 compute-0 systemd[1]: libpod-conmon-6990debb437cf71246448226b1d522ebbf400f13b74f16d9f424128ec38b62d6.scope: Deactivated successfully.
Dec  3 01:32:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:32:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:32:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3a699973-21f0-4b11-9c99-2851c61eec87 does not exist
Dec  3 01:32:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cbb94639-9b8d-46ef-bc73-468b37ee079f does not exist
Dec  3 01:32:21 compute-0 python3.9[285940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725540.0959888-333-178416649302138/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:32:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:23 compute-0 python3.9[286092]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:32:24 compute-0 python3.9[286244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:32:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:25 compute-0 python3.9[286367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725543.3989627-358-228609382032875/.source.json _original_basename=.fziwbwmr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:26 compute-0 podman[286493]: 2025-12-03 01:32:26.260826033 +0000 UTC m=+0.187322784 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Dec  3 01:32:26 compute-0 python3.9[286539]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:32:28
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:32:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:32:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:29 compute-0 podman[158098]: time="2025-12-03T01:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  3 01:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6843 "" "Go-http-client/1.1"
Dec  3 01:32:30 compute-0 python3.9[286971]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  3 01:32:30 compute-0 podman[286972]: 2025-12-03 01:32:30.835411677 +0000 UTC m=+0.102035976 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:32:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:32:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:32:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:32 compute-0 python3.9[287146]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:32:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:33 compute-0 python3.9[287298]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 01:32:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:36 compute-0 python3[287476]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:32:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:32:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:46 compute-0 podman[287550]: 2025-12-03 01:32:46.962066406 +0000 UTC m=+1.567010751 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:32:46 compute-0 podman[287551]: 2025-12-03 01:32:46.978248967 +0000 UTC m=+1.581382722 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7)
Dec  3 01:32:46 compute-0 podman[287552]: 2025-12-03 01:32:46.980442817 +0000 UTC m=+1.583064658 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:32:47 compute-0 podman[287553]: 2025-12-03 01:32:47.003917008 +0000 UTC m=+1.598253833 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:32:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:47 compute-0 podman[287490]: 2025-12-03 01:32:47.658688239 +0000 UTC m=+11.033525872 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 01:32:47 compute-0 podman[287669]: 2025-12-03 01:32:47.9500146 +0000 UTC m=+0.094617963 container create 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 01:32:47 compute-0 podman[287669]: 2025-12-03 01:32:47.902922685 +0000 UTC m=+0.047526068 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 01:32:47 compute-0 python3[287476]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 01:32:48 compute-0 podman[287801]: 2025-12-03 01:32:48.863356497 +0000 UTC m=+0.115366709 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  3 01:32:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:49 compute-0 python3.9[287877]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:32:50 compute-0 python3.9[288032]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:51 compute-0 python3.9[288108]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:32:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:52 compute-0 python3.9[288259]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725571.4598265-446-134686505984931/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:32:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:54 compute-0 python3.9[288335]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:32:54 compute-0 systemd[1]: Reloading.
Dec  3 01:32:54 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:32:54 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:32:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:56 compute-0 python3.9[288447]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:32:56 compute-0 systemd[1]: Reloading.
Dec  3 01:32:56 compute-0 podman[288449]: 2025-12-03 01:32:56.766661945 +0000 UTC m=+0.142915342 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container)
Dec  3 01:32:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:32:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:32:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:57 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec  3 01:32:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:32:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02c71fafd679e995a36529ccc3f301be28fb64ad6b23ce21a437cb97af0b4eb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02c71fafd679e995a36529ccc3f301be28fb64ad6b23ce21a437cb97af0b4eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 01:32:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.
Dec  3 01:32:57 compute-0 podman[288508]: 2025-12-03 01:32:57.44724336 +0000 UTC m=+0.260781538 container init 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + sudo -E kolla_set_configs
Dec  3 01:32:57 compute-0 podman[288508]: 2025-12-03 01:32:57.495825976 +0000 UTC m=+0.309364154 container start 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 01:32:57 compute-0 edpm-start-podman-container[288508]: ovn_metadata_agent
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Validating config file
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Copying service configuration files
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Writing out command to execute
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: ++ cat /run_command
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + CMD=neutron-ovn-metadata-agent
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + ARGS=
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + sudo kolla_copy_cacerts
Dec  3 01:32:57 compute-0 edpm-start-podman-container[288507]: Creating additional drop-in dependency for "ovn_metadata_agent" (5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6)
Dec  3 01:32:57 compute-0 podman[288530]: 2025-12-03 01:32:57.6373803 +0000 UTC m=+0.124628883 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + [[ ! -n '' ]]
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + . kolla_extend_start
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: Running command: 'neutron-ovn-metadata-agent'
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + umask 0022
Dec  3 01:32:57 compute-0 ovn_metadata_agent[288523]: + exec neutron-ovn-metadata-agent
Dec  3 01:32:57 compute-0 systemd[1]: Reloading.
Dec  3 01:32:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:32:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:32:58 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:32:58 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec  3 01:32:58 compute-0 systemd[1]: session-53.scope: Consumed 1min 34.303s CPU time.
Dec  3 01:32:58 compute-0 systemd-logind[800]: Session 53 logged out. Waiting for processes to exit.
Dec  3 01:32:58 compute-0 systemd-logind[800]: Removed session 53.
Dec  3 01:32:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.537 288528 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.538 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.539 288528 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.540 288528 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.541 288528 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.542 288528 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.543 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.544 288528 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.545 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.546 288528 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.547 288528 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.548 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.549 288528 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.550 288528 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.551 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.552 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.553 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.554 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.555 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.556 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.557 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.558 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.559 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.560 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.561 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.562 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.563 288528 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.564 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.565 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.566 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.567 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.568 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.569 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.570 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.571 288528 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.572 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.573 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.574 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.575 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.576 288528 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.586 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.587 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.587 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.601 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name eda9fd7d-f2b1-4121-b9ac-fc31f8426272 (UUID: eda9fd7d-f2b1-4121-b9ac-fc31f8426272) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.642 288528 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.643 288528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.646 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.652 288528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.657 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'eda9fd7d-f2b1-4121-b9ac-fc31f8426272'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], external_ids={}, name=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, nb_cfg_timestamp=1764723909412, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.658 288528 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f652f23de80>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.660 288528 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.664 288528 DEBUG oslo_service.service [-] Started child 288634 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.667 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpox5gvgqk/privsep.sock']#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.669 288634 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429760'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.704 288634 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.705 288634 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.705 288634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.713 288634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.722 288634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  3 01:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:32:59.731 288634 INFO eventlet.wsgi.server [-] (288634) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  3 01:32:59 compute-0 podman[158098]: time="2025-12-03T01:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.400 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.401 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpox5gvgqk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.270 288639 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.277 288639 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.281 288639 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.281 288639 INFO oslo.privsep.daemon [-] privsep daemon running as pid 288639#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.406 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[12b09bcb-2264-4ae1-938f-1a29616dbed8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:33:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:00.926 288639 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:33:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:33:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.489 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[e48bf436-638a-4017-9606-d9ca126af20a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.491 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, column=external_ids, values=({'neutron:ovn-metadata-id': 'dfc124bb-8fd2-5454-9234-248aae16aad5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.509 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.525 288528 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.526 288528 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.527 288528 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.528 288528 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.529 288528 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.530 288528 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.531 288528 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.532 288528 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.533 288528 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.534 288528 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.535 288528 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.536 288528 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.537 288528 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.538 288528 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.539 288528 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.540 288528 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.541 288528 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.542 288528 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.543 288528 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.544 288528 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.545 288528 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.546 288528 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.547 288528 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.548 288528 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.548 288528 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.549 288528 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.550 288528 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.551 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.552 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.553 288528 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.554 288528 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.555 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.556 288528 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.557 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.558 288528 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.559 288528 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.560 288528 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.561 288528 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.562 288528 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.563 288528 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.564 288528 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.565 288528 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.566 288528 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.567 288528 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.568 288528 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.569 288528 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.570 288528 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.571 288528 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.571 288528 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.572 288528 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.573 288528 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.574 288528 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.575 288528 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.576 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.577 288528 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.578 288528 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.579 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.580 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.581 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.582 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.583 288528 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:33:01 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:01.583 288528 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 01:33:01 compute-0 podman[288644]: 2025-12-03 01:33:01.90837816 +0000 UTC m=+0.155768473 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:33:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:04 compute-0 systemd-logind[800]: New session 54 of user zuul.
Dec  3 01:33:04 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec  3 01:33:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:06 compute-0 python3.9[288821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:33:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:07 compute-0 python3.9[288977]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:09 compute-0 python3.9[289142]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:33:09 compute-0 systemd[1]: Reloading.
Dec  3 01:33:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:33:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:33:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:11 compute-0 python3.9[289327]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:33:12 compute-0 network[289344]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:33:12 compute-0 network[289345]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:33:12 compute-0 network[289346]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:33:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:17 compute-0 podman[289522]: 2025-12-03 01:33:17.858257678 +0000 UTC m=+0.097344888 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  3 01:33:17 compute-0 podman[289520]: 2025-12-03 01:33:17.865612539 +0000 UTC m=+0.102519110 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:33:17 compute-0 podman[289514]: 2025-12-03 01:33:17.88727142 +0000 UTC m=+0.130525394 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:33:17 compute-0 podman[289525]: 2025-12-03 01:33:17.934660284 +0000 UTC m=+0.161789128 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  3 01:33:18 compute-0 python3.9[289704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:19 compute-0 podman[289830]: 2025-12-03 01:33:19.831227495 +0000 UTC m=+0.141707009 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:33:21 compute-0 python3.9[289877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:22 compute-0 python3.9[290109]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:22 compute-0 podman[290203]: 2025-12-03 01:33:22.864677421 +0000 UTC m=+0.109979404 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:33:22 compute-0 podman[290203]: 2025-12-03 01:33:22.972141784 +0000 UTC m=+0.217443727 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:33:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:23 compute-0 python3.9[290443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:33:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:33:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:25 compute-0 python3.9[290755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ec3a39e7-1825-499c-92ec-925d74f66884 does not exist
Dec  3 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f2e44fd1-b3ca-4044-a463-8f98c39bb14e does not exist
Dec  3 01:33:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ca79ad86-911e-4577-ad31-70f7c4c0abca does not exist
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:33:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:33:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:33:26 compute-0 python3.9[291039]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.642369962 +0000 UTC m=+0.079153982 container create 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.608673412 +0000 UTC m=+0.045457502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:26 compute-0 systemd[1]: Started libpod-conmon-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope.
Dec  3 01:33:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.798956966 +0000 UTC m=+0.235741046 container init 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.81266344 +0000 UTC m=+0.249447480 container start 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.820209656 +0000 UTC m=+0.256993676 container attach 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:33:26 compute-0 distracted_ganguly[291142]: 167 167
Dec  3 01:33:26 compute-0 systemd[1]: libpod-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope: Deactivated successfully.
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.822854398 +0000 UTC m=+0.259638408 container died 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a98311949d94dca6fd6911e3ab8890c046a01e8499e3861e968f6b161c136f65-merged.mount: Deactivated successfully.
Dec  3 01:33:26 compute-0 podman[291087]: 2025-12-03 01:33:26.919027004 +0000 UTC m=+0.355811004 container remove 508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:33:26 compute-0 systemd[1]: libpod-conmon-508d0d8bf0996fae9046c0f006cf9d5fe75b7c912f60adc9b64dd04887c0bf82.scope: Deactivated successfully.
Dec  3 01:33:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.169007388 +0000 UTC m=+0.075426580 container create 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:33:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:27 compute-0 systemd[1]: Started libpod-conmon-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope.
Dec  3 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.143473111 +0000 UTC m=+0.049892393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:27 compute-0 podman[291254]: 2025-12-03 01:33:27.342281038 +0000 UTC m=+0.122220418 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, architecture=x86_64, distribution-scope=public)
Dec  3 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.34308766 +0000 UTC m=+0.249506902 container init 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.35409761 +0000 UTC m=+0.260516832 container start 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:33:27 compute-0 podman[291221]: 2025-12-03 01:33:27.36141689 +0000 UTC m=+0.267836152 container attach 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:33:27 compute-0 python3.9[291304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:33:28
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta']
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:33:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> relative data size: 1.0
Dec  3 01:33:28 compute-0 beautiful_chandrasekhar[291269]: --> All data devices are unavailable
Dec  3 01:33:28 compute-0 podman[291221]: 2025-12-03 01:33:28.687245261 +0000 UTC m=+1.593664463 container died 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:33:28 compute-0 systemd[1]: libpod-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Deactivated successfully.
Dec  3 01:33:28 compute-0 systemd[1]: libpod-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Consumed 1.260s CPU time.
Dec  3 01:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6881f95f3877a4f3d03055d07584613729df2ae24559a9f0c42d5325b94a2504-merged.mount: Deactivated successfully.
Dec  3 01:33:28 compute-0 podman[291221]: 2025-12-03 01:33:28.785482783 +0000 UTC m=+1.691901975 container remove 0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chandrasekhar, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:33:28 compute-0 systemd[1]: libpod-conmon-0394a5a09900a33edc8876911a5c03209cb168b30b2d09f6e699397a4d0654a3.scope: Deactivated successfully.
Dec  3 01:33:28 compute-0 podman[291456]: 2025-12-03 01:33:28.843477806 +0000 UTC m=+0.100769352 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  3 01:33:29 compute-0 python3.9[291521]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:29 compute-0 podman[158098]: time="2025-12-03T01:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7271 "" "Go-http-client/1.1"
Dec  3 01:33:29 compute-0 podman[291784]: 2025-12-03 01:33:29.896388538 +0000 UTC m=+0.080577041 container create f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:29 compute-0 podman[291784]: 2025-12-03 01:33:29.860753855 +0000 UTC m=+0.044942418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:29 compute-0 systemd[1]: Started libpod-conmon-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope.
Dec  3 01:33:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.075450516 +0000 UTC m=+0.259639079 container init f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.094414623 +0000 UTC m=+0.278603136 container start f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.100845319 +0000 UTC m=+0.285033872 container attach f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:30 compute-0 festive_swartz[291823]: 167 167
Dec  3 01:33:30 compute-0 systemd[1]: libpod-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope: Deactivated successfully.
Dec  3 01:33:30 compute-0 conmon[291823]: conmon f5c695b4dcea40ea8048 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope/container/memory.events
Dec  3 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.110633246 +0000 UTC m=+0.294821749 container died f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:33:30 compute-0 python3.9[291820]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe857c19c710c61e6a7e5408d445c5e2aa2309edde1cfd9aed05220fc3c3c62-merged.mount: Deactivated successfully.
Dec  3 01:33:30 compute-0 podman[291784]: 2025-12-03 01:33:30.192932853 +0000 UTC m=+0.377121366 container remove f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:33:30 compute-0 systemd[1]: libpod-conmon-f5c695b4dcea40ea8048a0ea79f12c5b80a55eb776a7f260c2c985faf34de6d4.scope: Deactivated successfully.
Dec  3 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.466522691 +0000 UTC m=+0.087241292 container create 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.434316762 +0000 UTC m=+0.055035423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:30 compute-0 systemd[1]: Started libpod-conmon-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope.
Dec  3 01:33:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.643406259 +0000 UTC m=+0.264124920 container init 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  3 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.662875131 +0000 UTC m=+0.283593712 container start 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:30 compute-0 podman[291877]: 2025-12-03 01:33:30.66833981 +0000 UTC m=+0.289058471 container attach 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:31 compute-0 python3.9[292016]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:33:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:33:31 compute-0 clever_black[291930]: {
Dec  3 01:33:31 compute-0 clever_black[291930]:    "0": [
Dec  3 01:33:31 compute-0 clever_black[291930]:        {
Dec  3 01:33:31 compute-0 clever_black[291930]:            "devices": [
Dec  3 01:33:31 compute-0 clever_black[291930]:                "/dev/loop3"
Dec  3 01:33:31 compute-0 clever_black[291930]:            ],
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_name": "ceph_lv0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_size": "21470642176",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "name": "ceph_lv0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "tags": {
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_name": "ceph",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.crush_device_class": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.encrypted": "0",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_id": "0",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.vdo": "0"
Dec  3 01:33:31 compute-0 clever_black[291930]:            },
Dec  3 01:33:31 compute-0 clever_black[291930]:            "type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "vg_name": "ceph_vg0"
Dec  3 01:33:31 compute-0 clever_black[291930]:        }
Dec  3 01:33:31 compute-0 clever_black[291930]:    ],
Dec  3 01:33:31 compute-0 clever_black[291930]:    "1": [
Dec  3 01:33:31 compute-0 clever_black[291930]:        {
Dec  3 01:33:31 compute-0 clever_black[291930]:            "devices": [
Dec  3 01:33:31 compute-0 clever_black[291930]:                "/dev/loop4"
Dec  3 01:33:31 compute-0 clever_black[291930]:            ],
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_name": "ceph_lv1",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_size": "21470642176",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "name": "ceph_lv1",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "tags": {
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_name": "ceph",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.crush_device_class": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.encrypted": "0",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_id": "1",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.vdo": "0"
Dec  3 01:33:31 compute-0 clever_black[291930]:            },
Dec  3 01:33:31 compute-0 clever_black[291930]:            "type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "vg_name": "ceph_vg1"
Dec  3 01:33:31 compute-0 clever_black[291930]:        }
Dec  3 01:33:31 compute-0 clever_black[291930]:    ],
Dec  3 01:33:31 compute-0 clever_black[291930]:    "2": [
Dec  3 01:33:31 compute-0 clever_black[291930]:        {
Dec  3 01:33:31 compute-0 clever_black[291930]:            "devices": [
Dec  3 01:33:31 compute-0 clever_black[291930]:                "/dev/loop5"
Dec  3 01:33:31 compute-0 clever_black[291930]:            ],
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_name": "ceph_lv2",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_size": "21470642176",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "name": "ceph_lv2",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "tags": {
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.cluster_name": "ceph",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.crush_device_class": "",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.encrypted": "0",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osd_id": "2",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:                "ceph.vdo": "0"
Dec  3 01:33:31 compute-0 clever_black[291930]:            },
Dec  3 01:33:31 compute-0 clever_black[291930]:            "type": "block",
Dec  3 01:33:31 compute-0 clever_black[291930]:            "vg_name": "ceph_vg2"
Dec  3 01:33:31 compute-0 clever_black[291930]:        }
Dec  3 01:33:31 compute-0 clever_black[291930]:    ]
Dec  3 01:33:31 compute-0 clever_black[291930]: }
Dec  3 01:33:31 compute-0 systemd[1]: libpod-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope: Deactivated successfully.
Dec  3 01:33:31 compute-0 podman[291877]: 2025-12-03 01:33:31.507617989 +0000 UTC m=+1.128336590 container died 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f6188e14fa5835c969a8cff49e1a71cbc4fbaa8732f726e679fb9eee249794f-merged.mount: Deactivated successfully.
Dec  3 01:33:31 compute-0 podman[291877]: 2025-12-03 01:33:31.606806507 +0000 UTC m=+1.227525088 container remove 8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_black, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:33:31 compute-0 systemd[1]: libpod-conmon-8ac924686fdaed98c0f1d284d53a71cbf7ed63a9c3fafc0451309d88e1eba798.scope: Deactivated successfully.
Dec  3 01:33:32 compute-0 podman[292253]: 2025-12-03 01:33:32.156114162 +0000 UTC m=+0.121595830 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:33:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:32 compute-0 python3.9[292271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.684926797 +0000 UTC m=+0.086444851 container create 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.656870981 +0000 UTC m=+0.058389035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:32 compute-0 systemd[1]: Started libpod-conmon-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope.
Dec  3 01:33:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.835098466 +0000 UTC m=+0.236616550 container init 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.854058904 +0000 UTC m=+0.255576938 container start 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.861381484 +0000 UTC m=+0.262899578 container attach 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:33:32 compute-0 angry_aryabhata[292438]: 167 167
Dec  3 01:33:32 compute-0 systemd[1]: libpod-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope: Deactivated successfully.
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.868283842 +0000 UTC m=+0.269801856 container died 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b0fa54bfb6c60774ed73bf47b57c47cde1795d5f83e701aa2b47d6a499571f0-merged.mount: Deactivated successfully.
Dec  3 01:33:32 compute-0 podman[292390]: 2025-12-03 01:33:32.945765177 +0000 UTC m=+0.347283221 container remove 6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_aryabhata, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 01:33:32 compute-0 systemd[1]: libpod-conmon-6c35d38196f94ebee51250ebdcd5de2e53d040bbb935bf656207e14d5e35a742.scope: Deactivated successfully.
Dec  3 01:33:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.180334801 +0000 UTC m=+0.077332272 container create 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.14916839 +0000 UTC m=+0.046165861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:33:33 compute-0 systemd[1]: Started libpod-conmon-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope.
Dec  3 01:33:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.367395337 +0000 UTC m=+0.264392808 container init 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.401282812 +0000 UTC m=+0.298280283 container start 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:33:33 compute-0 podman[292463]: 2025-12-03 01:33:33.408609592 +0000 UTC m=+0.305607103 container attach 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:33:34 compute-0 python3.9[292560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]: {
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_id": 2,
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "type": "bluestore"
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    },
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_id": 1,
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "type": "bluestore"
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    },
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_id": 0,
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:        "type": "bluestore"
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]:    }
Dec  3 01:33:34 compute-0 nostalgic_aryabhata[292480]: }
Dec  3 01:33:34 compute-0 systemd[1]: libpod-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Deactivated successfully.
Dec  3 01:33:34 compute-0 systemd[1]: libpod-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Consumed 1.202s CPU time.
Dec  3 01:33:34 compute-0 podman[292636]: 2025-12-03 01:33:34.670038806 +0000 UTC m=+0.044432594 container died 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacaad438d5db30bfa0c05ced1ed8091d267a1b1ce2f29272dc610896987c8dd-merged.mount: Deactivated successfully.
Dec  3 01:33:34 compute-0 podman[292636]: 2025-12-03 01:33:34.758367287 +0000 UTC m=+0.132761005 container remove 140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:33:34 compute-0 systemd[1]: libpod-conmon-140266db9537fdac9794a9bd8dbb4f7467f66a35d8371bbd7b9443b8b81a78f5.scope: Deactivated successfully.
Dec  3 01:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b512dc1-e29c-4c24-abee-69f8cdecccb8 does not exist
Dec  3 01:33:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 82b379cb-fba0-4778-99e7-c2759aae589f does not exist
Dec  3 01:33:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:33:35 compute-0 python3.9[292802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:36 compute-0 python3.9[292954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:33:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:33:38 compute-0 python3.9[293106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:39 compute-0 python3.9[293258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:40 compute-0 python3.9[293412]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.972 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.975 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.976 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.990 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.991 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:33:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:33:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:41 compute-0 python3.9[293567]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:42 compute-0 python3.9[293719]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:43 compute-0 python3.9[293871]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:44 compute-0 python3.9[294023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:33:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:45 compute-0 python3.9[294175]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:46 compute-0 python3.9[294327]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:33:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:48 compute-0 podman[294452]: 2025-12-03 01:33:48.875778275 +0000 UTC m=+0.108503903 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4)
Dec  3 01:33:48 compute-0 podman[294451]: 2025-12-03 01:33:48.888602825 +0000 UTC m=+0.122170866 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 01:33:48 compute-0 podman[294447]: 2025-12-03 01:33:48.888633966 +0000 UTC m=+0.127537163 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:33:48 compute-0 podman[294454]: 2025-12-03 01:33:48.908628122 +0000 UTC m=+0.125707573 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 01:33:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:49 compute-0 python3.9[294558]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:33:49 compute-0 systemd[1]: Reloading.
Dec  3 01:33:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:33:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:33:50 compute-0 podman[294704]: 2025-12-03 01:33:50.895151048 +0000 UTC m=+0.144913547 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 01:33:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:51 compute-0 python3.9[294775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:52 compute-0 python3.9[294928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:53 compute-0 python3.9[295081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:54 compute-0 python3.9[295234]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:55 compute-0 python3.9[295387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:56 compute-0 python3.9[295540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:33:57 compute-0 podman[295665]: 2025-12-03 01:33:57.879158493 +0000 UTC m=+0.154191780 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9)
Dec  3 01:33:58 compute-0 python3.9[295709]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:33:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:33:59 compute-0 podman[295835]: 2025-12-03 01:33:59.444300377 +0000 UTC m=+0.106548569 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.589 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:33:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:33:59 compute-0 python3.9[295881]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  3 01:33:59 compute-0 podman[158098]: time="2025-12-03T01:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
Dec  3 01:34:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:01 compute-0 python3.9[296034]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:34:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:02 compute-0 podman[296090]: 2025-12-03 01:34:02.487253982 +0000 UTC m=+0.135913621 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:34:02 compute-0 python3.9[296141]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:34:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:05 compute-0 python3.9[296294]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:34:07 compute-0 python3.9[296449]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:34:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:08 compute-0 python3.9[296604]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:34:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:09 compute-0 python3.9[296759]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:34:10 compute-0 python3.9[296916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:12 compute-0 python3.9[297071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:13 compute-0 python3.9[297226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:14 compute-0 python3.9[297381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:16 compute-0 python3.9[297536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:17 compute-0 python3.9[297691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 01:34:19 compute-0 python3.9[297846]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:19 compute-0 podman[297848]: 2025-12-03 01:34:19.229653965 +0000 UTC m=+0.103209088 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:34:19 compute-0 podman[297849]: 2025-12-03 01:34:19.249796665 +0000 UTC m=+0.122775643 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Dec  3 01:34:19 compute-0 podman[297851]: 2025-12-03 01:34:19.271694603 +0000 UTC m=+0.141360600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:34:19 compute-0 podman[297850]: 2025-12-03 01:34:19.289743315 +0000 UTC m=+0.159840024 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 01:34:20 compute-0 python3.9[298085]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:21 compute-0 podman[298212]: 2025-12-03 01:34:21.353807659 +0000 UTC m=+0.114334252 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 01:34:21 compute-0 python3.9[298260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:23 compute-0 python3.9[298415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:24 compute-0 python3.9[298570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:25 compute-0 python3.9[298725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:27 compute-0 python3.9[298880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:34:28
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:28 compute-0 podman[298960]: 2025-12-03 01:34:28.407159597 +0000 UTC m=+0.133005112 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, config_id=edpm, io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:34:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:34:29 compute-0 python3.9[299054]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:29 compute-0 podman[158098]: time="2025-12-03T01:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:34:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:34:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7284 "" "Go-http-client/1.1"
Dec  3 01:34:29 compute-0 podman[299157]: 2025-12-03 01:34:29.904060368 +0000 UTC m=+0.143350814 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:34:30 compute-0 python3.9[299227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:34:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:34:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:32 compute-0 python3.9[299382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:32 compute-0 podman[299384]: 2025-12-03 01:34:32.78499693 +0000 UTC m=+0.161996573 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:34:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:33 compute-0 python3.9[299563]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:35 compute-0 python3.9[299718]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11954f0f-1c9f-4ffd-adbc-f97673661b8b does not exist
Dec  3 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 462f97c5-4003-41b1-b4ab-0f4643b841ae does not exist
Dec  3 01:34:36 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 691d94a4-5de2-4922-8f7c-716ed0f8517e does not exist
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:34:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:34:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:34:36 compute-0 python3.9[299990]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.382112159 +0000 UTC m=+0.106110697 container create a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.328761213 +0000 UTC m=+0.052759811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:37 compute-0 systemd[1]: Started libpod-conmon-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope.
Dec  3 01:34:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.525137053 +0000 UTC m=+0.249135571 container init a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.541472219 +0000 UTC m=+0.265470747 container start a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.549002175 +0000 UTC m=+0.273000703 container attach a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:34:37 compute-0 eloquent_ardinghelli[300311]: 167 167
Dec  3 01:34:37 compute-0 systemd[1]: libpod-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope: Deactivated successfully.
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.554365361 +0000 UTC m=+0.278363889 container died a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:34:37 compute-0 python3.9[300301]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 01:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12123f46fa9101f7ebc446b364828b85236da4457f6b9fd7ec64d993b8912b0-merged.mount: Deactivated successfully.
Dec  3 01:34:37 compute-0 podman[300293]: 2025-12-03 01:34:37.636828902 +0000 UTC m=+0.360827440 container remove a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:34:37 compute-0 systemd[1]: libpod-conmon-a4cde70839b769a53f9afc42521ff5c4c356bc4792f785b4ce06018175fadec2.scope: Deactivated successfully.
Dec  3 01:34:37 compute-0 podman[300344]: 2025-12-03 01:34:37.8994143 +0000 UTC m=+0.076633053 container create fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 podman[300344]: 2025-12-03 01:34:37.862425981 +0000 UTC m=+0.039644774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:34:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:34:37 compute-0 systemd[1]: Started libpod-conmon-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope.
Dec  3 01:34:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.072654749 +0000 UTC m=+0.249873492 container init fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.104593461 +0000 UTC m=+0.281812204 container start fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:34:38 compute-0 podman[300344]: 2025-12-03 01:34:38.111729336 +0000 UTC m=+0.288948079 container attach fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:34:38 compute-0 python3.9[300508]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:39 compute-0 elastic_buck[300376]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:34:39 compute-0 elastic_buck[300376]: --> relative data size: 1.0
Dec  3 01:34:39 compute-0 elastic_buck[300376]: --> All data devices are unavailable
Dec  3 01:34:39 compute-0 systemd[1]: libpod-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Deactivated successfully.
Dec  3 01:34:39 compute-0 podman[300344]: 2025-12-03 01:34:39.40501549 +0000 UTC m=+1.582234223 container died fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:34:39 compute-0 systemd[1]: libpod-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Consumed 1.246s CPU time.
Dec  3 01:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fd434fb70ce101eabd3c6490419309008cc2d376eff3afe1c4330c385a0438e-merged.mount: Deactivated successfully.
Dec  3 01:34:39 compute-0 podman[300344]: 2025-12-03 01:34:39.523023 +0000 UTC m=+1.700241733 container remove fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_buck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:34:39 compute-0 systemd[1]: libpod-conmon-fce3aa3ef78d874c14ace303b0424d800dc0fb077bea6d396b293e143e6af472.scope: Deactivated successfully.
Dec  3 01:34:40 compute-0 python3.9[300745]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.635292882 +0000 UTC m=+0.082015699 container create 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.599243438 +0000 UTC m=+0.045966295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:40 compute-0 systemd[1]: Started libpod-conmon-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope.
Dec  3 01:34:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.762148075 +0000 UTC m=+0.208870902 container init 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.775123309 +0000 UTC m=+0.221846096 container start 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.780222909 +0000 UTC m=+0.226945696 container attach 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:34:40 compute-0 eager_wilson[300975]: 167 167
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.787120377 +0000 UTC m=+0.233843174 container died 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec  3 01:34:40 compute-0 systemd[1]: libpod-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope: Deactivated successfully.
Dec  3 01:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-942cb886e497193f9fcc82ed95e08c32d1ff131da947ba950d1c66326fe45823-merged.mount: Deactivated successfully.
Dec  3 01:34:40 compute-0 podman[300935]: 2025-12-03 01:34:40.862909856 +0000 UTC m=+0.309632673 container remove 037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:34:40 compute-0 systemd[1]: libpod-conmon-037c3d047224b46c8c62073afc47a9659aa1041d683bcf094b70db72394942ff.scope: Deactivated successfully.
Dec  3 01:34:41 compute-0 python3.9[301019]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.164975111 +0000 UTC m=+0.101912523 container create efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:34:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.12570855 +0000 UTC m=+0.062646062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:41 compute-0 systemd[1]: Started libpod-conmon-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope.
Dec  3 01:34:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.296467951 +0000 UTC m=+0.233405403 container init efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.30814169 +0000 UTC m=+0.245079072 container start efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:34:41 compute-0 podman[301027]: 2025-12-03 01:34:41.312769876 +0000 UTC m=+0.249707308 container attach efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:34:42 compute-0 jolly_spence[301064]: {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    "0": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "devices": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "/dev/loop3"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            ],
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_name": "ceph_lv0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_size": "21470642176",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "name": "ceph_lv0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "tags": {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_name": "ceph",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.crush_device_class": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.encrypted": "0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_id": "0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.vdo": "0"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            },
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "vg_name": "ceph_vg0"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        }
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    ],
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    "1": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "devices": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "/dev/loop4"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            ],
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_name": "ceph_lv1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_size": "21470642176",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "name": "ceph_lv1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "tags": {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_name": "ceph",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.crush_device_class": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.encrypted": "0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_id": "1",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.vdo": "0"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            },
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "vg_name": "ceph_vg1"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        }
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    ],
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    "2": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "devices": [
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "/dev/loop5"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            ],
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_name": "ceph_lv2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_size": "21470642176",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "name": "ceph_lv2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "tags": {
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.cluster_name": "ceph",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.crush_device_class": "",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.encrypted": "0",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osd_id": "2",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:                "ceph.vdo": "0"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            },
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "type": "block",
Dec  3 01:34:42 compute-0 jolly_spence[301064]:            "vg_name": "ceph_vg2"
Dec  3 01:34:42 compute-0 jolly_spence[301064]:        }
Dec  3 01:34:42 compute-0 jolly_spence[301064]:    ]
Dec  3 01:34:42 compute-0 jolly_spence[301064]: }
Dec  3 01:34:42 compute-0 systemd[1]: libpod-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope: Deactivated successfully.
Dec  3 01:34:42 compute-0 podman[301027]: 2025-12-03 01:34:42.184011779 +0000 UTC m=+1.120949181 container died efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:34:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7339c3af79415305ad373308cad408ccf286ec9a4b01e1ea0c6a31dca0b64cc5-merged.mount: Deactivated successfully.
Dec  3 01:34:42 compute-0 python3.9[301199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:42 compute-0 podman[301027]: 2025-12-03 01:34:42.279938527 +0000 UTC m=+1.216875929 container remove efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:34:42 compute-0 systemd[1]: libpod-conmon-efca9664760330afa638e65cce576d16208adccf12d9ae7d7b479353e8f3e07f.scope: Deactivated successfully.
Dec  3 01:34:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:43 compute-0 python3.9[301489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.327939374 +0000 UTC m=+0.069715674 container create 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:34:43 compute-0 systemd[1]: Started libpod-conmon-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope.
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.306034746 +0000 UTC m=+0.047811126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.461179861 +0000 UTC m=+0.202956191 container init 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.478074783 +0000 UTC m=+0.219851113 container start 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:34:43 compute-0 magical_curran[301523]: 167 167
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.484740755 +0000 UTC m=+0.226517145 container attach 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:34:43 compute-0 systemd[1]: libpod-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope: Deactivated successfully.
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.487292904 +0000 UTC m=+0.229069224 container died 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:34:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e9481c271846f5faf1110e6455b62458bd0f8d8f0aa6e337d70e9d041cbb3fd-merged.mount: Deactivated successfully.
Dec  3 01:34:43 compute-0 podman[301507]: 2025-12-03 01:34:43.557923482 +0000 UTC m=+0.299699802 container remove 35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:34:43 compute-0 systemd[1]: libpod-conmon-35f1b8b8cfc0c67e48e4245e3729cae84bfb6ee1e3a3e983d27a54daa15e73c3.scope: Deactivated successfully.
Dec  3 01:34:43 compute-0 podman[301557]: 2025-12-03 01:34:43.822285479 +0000 UTC m=+0.067791602 container create 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:34:43 compute-0 podman[301557]: 2025-12-03 01:34:43.787711085 +0000 UTC m=+0.033217258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:34:43 compute-0 systemd[1]: Started libpod-conmon-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope.
Dec  3 01:34:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.012697957 +0000 UTC m=+0.258204090 container init 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.029195927 +0000 UTC m=+0.274702050 container start 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:34:44 compute-0 podman[301557]: 2025-12-03 01:34:44.038959753 +0000 UTC m=+0.284465877 container attach 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:34:44 compute-0 python3.9[301715]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:34:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]: {
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_id": 2,
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "type": "bluestore"
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    },
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_id": 1,
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "type": "bluestore"
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    },
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_id": 0,
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:        "type": "bluestore"
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]:    }
Dec  3 01:34:45 compute-0 sleepy_roentgen[301596]: }
Dec  3 01:34:45 compute-0 systemd[1]: libpod-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Deactivated successfully.
Dec  3 01:34:45 compute-0 systemd[1]: libpod-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Consumed 1.252s CPU time.
Dec  3 01:34:45 compute-0 podman[301557]: 2025-12-03 01:34:45.293825248 +0000 UTC m=+1.539331361 container died 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-24da559fe4e4ae2cdbea637053a788086aaa413296dd29682d962829a2004d84-merged.mount: Deactivated successfully.
Dec  3 01:34:45 compute-0 podman[301557]: 2025-12-03 01:34:45.396911802 +0000 UTC m=+1.642417925 container remove 8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:34:45 compute-0 systemd[1]: libpod-conmon-8b9912179f3a2425d2835725bc8e689b5aadc728df399f1aaeb8e845ab6326b6.scope: Deactivated successfully.
Dec  3 01:34:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:34:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:34:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7ce3e72a-3e6a-4554-848d-7ac110192202 does not exist
Dec  3 01:34:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a4f95231-2302-4f84-9ef6-64d22834ef8d does not exist
Dec  3 01:34:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:34:46 compute-0 python3.9[301956]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:47 compute-0 python3.9[302034]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:48 compute-0 python3.9[302186]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:49 compute-0 podman[302236]: 2025-12-03 01:34:49.522837454 +0000 UTC m=+0.125784349 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:34:49 compute-0 podman[302238]: 2025-12-03 01:34:49.534649137 +0000 UTC m=+0.128559738 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 01:34:49 compute-0 podman[302237]: 2025-12-03 01:34:49.550816573 +0000 UTC m=+0.148111809 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec  3 01:34:49 compute-0 podman[302239]: 2025-12-03 01:34:49.559027925 +0000 UTC m=+0.138359094 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:34:49 compute-0 python3.9[302338]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:50 compute-0 python3.9[302497]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:51 compute-0 python3.9[302575]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:51 compute-0 podman[302576]: 2025-12-03 01:34:51.69463288 +0000 UTC m=+0.130992256 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 01:34:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:52 compute-0 python3.9[302746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:53 compute-0 python3.9[302826]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:54 compute-0 python3.9[302978]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:55 compute-0 python3.9[303056]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:56 compute-0 python3.9[303208]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:56 compute-0 python3.9[303286]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:34:57 compute-0 python3.9[303438]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:34:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:34:58 compute-0 podman[303464]: 2025-12-03 01:34:58.927661502 +0000 UTC m=+0.174916116 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler)
Dec  3 01:34:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:34:59 compute-0 python3.9[303535]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.590 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.591 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:34:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:34:59.591 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:34:59 compute-0 podman[158098]: time="2025-12-03T01:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:34:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7276 "" "Go-http-client/1.1"
Dec  3 01:35:00 compute-0 podman[303659]: 2025-12-03 01:35:00.310853502 +0000 UTC m=+0.151493544 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:35:00 compute-0 python3.9[303705]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:35:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:35:01 compute-0 python3.9[303783]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:02 compute-0 python3.9[303937]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  3 01:35:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:03 compute-0 podman[304042]: 2025-12-03 01:35:03.866714093 +0000 UTC m=+0.118008890 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:35:04 compute-0 python3.9[304114]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:05 compute-0 python3.9[304266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:06 compute-0 python3.9[304418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:07 compute-0 python3.9[304570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:08 compute-0 python3.9[304722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:09 compute-0 python3.9[304874]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:11 compute-0 python3.9[305026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:12 compute-0 python3.9[305178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:13 compute-0 python3.9[305330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:14 compute-0 python3.9[305482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:16 compute-0 python3.9[305634]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:17 compute-0 python3.9[305788]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:18 compute-0 python3.9[305940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:19 compute-0 python3.9[306092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:19 compute-0 podman[306126]: 2025-12-03 01:35:19.892133041 +0000 UTC m=+0.122439555 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  3 01:35:19 compute-0 podman[306124]: 2025-12-03 01:35:19.892014948 +0000 UTC m=+0.129193606 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 01:35:19 compute-0 podman[306118]: 2025-12-03 01:35:19.905299943 +0000 UTC m=+0.145872546 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:35:19 compute-0 podman[306129]: 2025-12-03 01:35:19.93427337 +0000 UTC m=+0.151454783 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:35:20 compute-0 python3.9[306329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:21 compute-0 python3.9[306407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:22 compute-0 podman[306531]: 2025-12-03 01:35:22.140659295 +0000 UTC m=+0.129590167 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 01:35:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:22 compute-0 python3.9[306577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:23 compute-0 python3.9[306655]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:24 compute-0 python3.9[306807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:25 compute-0 python3.9[306885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:26 compute-0 python3.9[307037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:27 compute-0 python3.9[307115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:35:28
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.log', 'vms', 'images', '.rgw.root', 'default.rgw.meta']
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:28 compute-0 python3.9[307267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:35:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:35:29 compute-0 podman[307345]: 2025-12-03 01:35:29.190873096 +0000 UTC m=+0.121626372 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, vcs-type=git, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 01:35:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:29 compute-0 python3.9[307346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:29 compute-0 podman[158098]: time="2025-12-03T01:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:35:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
Dec  3 01:35:30 compute-0 python3.9[307516]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:30 compute-0 podman[307547]: 2025-12-03 01:35:30.865226551 +0000 UTC m=+0.118765671 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 01:35:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:31 compute-0 python3.9[307612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:35:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:35:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:32 compute-0 python3.9[307766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:32 compute-0 python3.9[307846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:33 compute-0 python3.9[307998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:34 compute-0 podman[308048]: 2025-12-03 01:35:34.468143692 +0000 UTC m=+0.123878196 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:35:34 compute-0 python3.9[308099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:35 compute-0 python3.9[308251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:37 compute-0 python3.9[308329]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:37 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:35:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:35:38 compute-0 python3.9[308481]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:39 compute-0 python3.9[308559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:40 compute-0 python3.9[308711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.973 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.974 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.992 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:35:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:35:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:41 compute-0 python3.9[308790]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.252473) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742252625, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1814, "num_deletes": 252, "total_data_size": 3122117, "memory_usage": 3178896, "flush_reason": "Manual Compaction"}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742277298, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1761679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11726, "largest_seqno": 13539, "table_properties": {"data_size": 1755754, "index_size": 3000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14657, "raw_average_key_size": 20, "raw_value_size": 1742706, "raw_average_value_size": 2387, "num_data_blocks": 139, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725533, "oldest_key_time": 1764725533, "file_creation_time": 1764725742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 24943 microseconds, and 10907 cpu microseconds.
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.277404) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1761679 bytes OK
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.277449) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280600) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280625) EVENT_LOG_v1 {"time_micros": 1764725742280617, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.280648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3114447, prev total WAL file size 3114447, number of live WAL files 2.
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.282470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1720KB)], [29(7646KB)]
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742282518, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9591213, "oldest_snapshot_seqno": -1}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4001 keys, 7566547 bytes, temperature: kUnknown
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742362337, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7566547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7538035, "index_size": 17394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95173, "raw_average_key_size": 23, "raw_value_size": 7464064, "raw_average_value_size": 1865, "num_data_blocks": 759, "num_entries": 4001, "num_filter_entries": 4001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.362815) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7566547 bytes
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.365931) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.0 rd, 94.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4418, records dropped: 417 output_compression: NoCompression
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.365962) EVENT_LOG_v1 {"time_micros": 1764725742365947, "job": 12, "event": "compaction_finished", "compaction_time_micros": 79910, "compaction_time_cpu_micros": 36518, "output_level": 6, "num_output_files": 1, "total_output_size": 7566547, "num_input_records": 4418, "num_output_records": 4001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742366846, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725742370090, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.282267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:35:42.370439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:35:42 compute-0 python3.9[308944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:43 compute-0 python3.9[309022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:44 compute-0 python3.9[309174]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:45 compute-0 python3.9[309252]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:46 compute-0 python3.9[309435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:35:46 compute-0 python3.9[309595]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 978e453f-18a3-4910-b21b-6bde7c5f1158 does not exist
Dec  3 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d34dfe07-3f09-4cee-9bda-ba5acd74fd51 does not exist
Dec  3 01:35:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 644b7a25-c0db-4d30-b319-22237a2ff141 does not exist
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:35:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:35:47 compute-0 python3.9[309862]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.25820778 +0000 UTC m=+0.086210993 container create 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.224714175 +0000 UTC m=+0.052717438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:48 compute-0 systemd[1]: Started libpod-conmon-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope.
Dec  3 01:35:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.410495376 +0000 UTC m=+0.238498629 container init 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.429943905 +0000 UTC m=+0.257947118 container start 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.437481517 +0000 UTC m=+0.265484730 container attach 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:35:48 compute-0 peaceful_dijkstra[309947]: 167 167
Dec  3 01:35:48 compute-0 systemd[1]: libpod-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope: Deactivated successfully.
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.442769096 +0000 UTC m=+0.270772309 container died 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45c80d8fc3f24fb85898b32b21b9f0a2938ddcf7bda2465499f66d531e3c308-merged.mount: Deactivated successfully.
Dec  3 01:35:48 compute-0 podman[309925]: 2025-12-03 01:35:48.527093115 +0000 UTC m=+0.355096298 container remove 84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:35:48 compute-0 systemd[1]: libpod-conmon-84b9a0695195aa37eb7be54ba19cd948aa25002746662e209991cd5e17446d47.scope: Deactivated successfully.
Dec  3 01:35:48 compute-0 podman[310014]: 2025-12-03 01:35:48.807426444 +0000 UTC m=+0.094819846 container create 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 01:35:48 compute-0 podman[310014]: 2025-12-03 01:35:48.772222541 +0000 UTC m=+0.059616013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:48 compute-0 systemd[1]: Started libpod-conmon-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope.
Dec  3 01:35:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.028256564 +0000 UTC m=+0.315649966 container init 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.046132168 +0000 UTC m=+0.333525550 container start 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:35:49 compute-0 podman[310014]: 2025-12-03 01:35:49.052119497 +0000 UTC m=+0.339512949 container attach 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:35:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:49 compute-0 python3.9[310111]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  3 01:35:50 compute-0 nostalgic_kilby[310030]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:35:50 compute-0 nostalgic_kilby[310030]: --> relative data size: 1.0
Dec  3 01:35:50 compute-0 nostalgic_kilby[310030]: --> All data devices are unavailable
Dec  3 01:35:50 compute-0 systemd[1]: libpod-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Deactivated successfully.
Dec  3 01:35:50 compute-0 systemd[1]: libpod-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Consumed 1.244s CPU time.
Dec  3 01:35:50 compute-0 podman[310136]: 2025-12-03 01:35:50.444535308 +0000 UTC m=+0.070069168 container died 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8290eab74f0fac425f80b2b6f86cb32465d42e3f6e4d6ffb4216308f8e7d3e88-merged.mount: Deactivated successfully.
Dec  3 01:35:50 compute-0 podman[310136]: 2025-12-03 01:35:50.546051522 +0000 UTC m=+0.171585352 container remove 9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:35:50 compute-0 podman[310143]: 2025-12-03 01:35:50.55163989 +0000 UTC m=+0.137001706 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, distribution-scope=public)
Dec  3 01:35:50 compute-0 systemd[1]: libpod-conmon-9e71d77b802fd6b244d2e0247c21a3e1e9914a83f18ce198dc809a173b1fdd2c.scope: Deactivated successfully.
Dec  3 01:35:50 compute-0 podman[310137]: 2025-12-03 01:35:50.565773899 +0000 UTC m=+0.158475882 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:35:50 compute-0 podman[310145]: 2025-12-03 01:35:50.571318745 +0000 UTC m=+0.150133727 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:35:50 compute-0 podman[310147]: 2025-12-03 01:35:50.583792597 +0000 UTC m=+0.171153140 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  3 01:35:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.636839073 +0000 UTC m=+0.092955603 container create c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.599357146 +0000 UTC m=+0.055473716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:51 compute-0 systemd[1]: Started libpod-conmon-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope.
Dec  3 01:35:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.792392342 +0000 UTC m=+0.248508922 container init c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.811058138 +0000 UTC m=+0.267174658 container start c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.817642524 +0000 UTC m=+0.273759054 container attach c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:35:51 compute-0 quizzical_carver[310431]: 167 167
Dec  3 01:35:51 compute-0 systemd[1]: libpod-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope: Deactivated successfully.
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.830075915 +0000 UTC m=+0.286192495 container died c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-45af201ba4a26a7775b20fa7ba24b61eeab903e5c0a268e5fbaab36c1c77762f-merged.mount: Deactivated successfully.
Dec  3 01:35:51 compute-0 podman[310416]: 2025-12-03 01:35:51.909640059 +0000 UTC m=+0.365756569 container remove c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:35:51 compute-0 systemd[1]: libpod-conmon-c9f452f7f4ad306ed58798386c54dca4af3b9a20d865ce001461fa6e8d9d26f7.scope: Deactivated successfully.
Dec  3 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.186936652 +0000 UTC m=+0.096285437 container create cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.151647897 +0000 UTC m=+0.060996722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:52 compute-0 systemd[1]: Started libpod-conmon-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope.
Dec  3 01:35:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.382693995 +0000 UTC m=+0.292042810 container init cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:35:52 compute-0 podman[310498]: 2025-12-03 01:35:52.394670292 +0000 UTC m=+0.133885468 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.401122594 +0000 UTC m=+0.310471350 container start cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:35:52 compute-0 podman[310485]: 2025-12-03 01:35:52.405986772 +0000 UTC m=+0.315335627 container attach cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:35:52 compute-0 python3.9[310600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:53 compute-0 funny_dewdney[310509]: {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    "0": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "devices": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "/dev/loop3"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            ],
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_name": "ceph_lv0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_size": "21470642176",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "name": "ceph_lv0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "tags": {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_name": "ceph",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.crush_device_class": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.encrypted": "0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_id": "0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.vdo": "0"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            },
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "vg_name": "ceph_vg0"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        }
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    ],
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    "1": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "devices": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "/dev/loop4"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            ],
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_name": "ceph_lv1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_size": "21470642176",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "name": "ceph_lv1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "tags": {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_name": "ceph",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.crush_device_class": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.encrypted": "0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_id": "1",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.vdo": "0"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            },
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "vg_name": "ceph_vg1"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        }
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    ],
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    "2": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "devices": [
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "/dev/loop5"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            ],
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_name": "ceph_lv2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_size": "21470642176",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "name": "ceph_lv2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "tags": {
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.cluster_name": "ceph",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.crush_device_class": "",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.encrypted": "0",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osd_id": "2",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:                "ceph.vdo": "0"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            },
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "type": "block",
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:            "vg_name": "ceph_vg2"
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:        }
Dec  3 01:35:53 compute-0 funny_dewdney[310509]:    ]
Dec  3 01:35:53 compute-0 funny_dewdney[310509]: }
Dec  3 01:35:53 compute-0 podman[310485]: 2025-12-03 01:35:53.188763685 +0000 UTC m=+1.098112440 container died cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:35:53 compute-0 systemd[1]: libpod-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope: Deactivated successfully.
Dec  3 01:35:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c0624ecdf4f131545466e4034efa038f86cc8d44364519b20154a5abbb2e02-merged.mount: Deactivated successfully.
Dec  3 01:35:53 compute-0 podman[310485]: 2025-12-03 01:35:53.293398166 +0000 UTC m=+1.202746921 container remove cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:35:53 compute-0 systemd[1]: libpod-conmon-cb83cba6f4e7bff46c642b2ebc3fd2a4d258d4e4cf6c61bca4ce9d54c34466f0.scope: Deactivated successfully.
Dec  3 01:35:53 compute-0 python3.9[310823]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.456850089 +0000 UTC m=+0.079619397 container create 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.420414941 +0000 UTC m=+0.043184319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:54 compute-0 systemd[1]: Started libpod-conmon-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope.
Dec  3 01:35:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.583206803 +0000 UTC m=+0.205976181 container init 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.595623804 +0000 UTC m=+0.218393112 container start 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:35:54 compute-0 upbeat_ritchie[311044]: 167 167
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.603167395 +0000 UTC m=+0.225936693 container attach 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:35:54 compute-0 systemd[1]: libpod-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope: Deactivated successfully.
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.605191843 +0000 UTC m=+0.227961121 container died 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b23174b90589d50260ee6a93a5d91e74d0bd61d1df3a576569f8775ff9bcd4c6-merged.mount: Deactivated successfully.
Dec  3 01:35:54 compute-0 podman[310995]: 2025-12-03 01:35:54.670763652 +0000 UTC m=+0.293532940 container remove 683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:35:54 compute-0 systemd[1]: libpod-conmon-683d81b750305265e1ca4823c1bd819f457bb45cc262afa14adeae372432b667.scope: Deactivated successfully.
Dec  3 01:35:54 compute-0 python3.9[311092]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:54 compute-0 podman[311098]: 2025-12-03 01:35:54.93618491 +0000 UTC m=+0.081341916 container create 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:35:54 compute-0 podman[311098]: 2025-12-03 01:35:54.898891808 +0000 UTC m=+0.044048864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:35:55 compute-0 systemd[1]: Started libpod-conmon-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope.
Dec  3 01:35:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.105207759 +0000 UTC m=+0.250364795 container init 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.124670738 +0000 UTC m=+0.269827744 container start 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:35:55 compute-0 podman[311098]: 2025-12-03 01:35:55.130864582 +0000 UTC m=+0.276021548 container attach 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:35:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:56 compute-0 python3.9[311271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]: {
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_id": 2,
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "type": "bluestore"
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    },
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_id": 1,
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "type": "bluestore"
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    },
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_id": 0,
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:        "type": "bluestore"
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]:    }
Dec  3 01:35:56 compute-0 jolly_kowalevski[311121]: }
Dec  3 01:35:56 compute-0 systemd[1]: libpod-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Deactivated successfully.
Dec  3 01:35:56 compute-0 systemd[1]: libpod-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Consumed 1.193s CPU time.
Dec  3 01:35:56 compute-0 podman[311098]: 2025-12-03 01:35:56.326196424 +0000 UTC m=+1.471353420 container died 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a36df89d02717120687d894fe9690573ee35f56e991a5f2068e9a5fac243241-merged.mount: Deactivated successfully.
Dec  3 01:35:56 compute-0 podman[311098]: 2025-12-03 01:35:56.417690785 +0000 UTC m=+1.562847761 container remove 4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:35:56 compute-0 systemd[1]: libpod-conmon-4e5da291991e9adadf98e4cf74ff93ff841814d949f092144a1805aee88f549e.scope: Deactivated successfully.
Dec  3 01:35:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:35:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:35:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 641e3905-4509-4481-acbf-e5af627b8796 does not exist
Dec  3 01:35:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 17132ad1-31b3-47f1-8fe8-6106f6994611 does not exist
Dec  3 01:35:57 compute-0 python3.9[311513]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:35:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:35:58 compute-0 python3.9[311665]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:35:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:35:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:35:59 compute-0 python3.9[311817]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.592 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:35:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:35:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:35:59 compute-0 podman[311818]: 2025-12-03 01:35:59.626155738 +0000 UTC m=+0.139889947 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:35:59 compute-0 podman[158098]: time="2025-12-03T01:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:35:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7280 "" "Go-http-client/1.1"
Dec  3 01:36:00 compute-0 python3.9[311990]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:36:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:36:01 compute-0 podman[312115]: 2025-12-03 01:36:01.446692757 +0000 UTC m=+0.125197043 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 01:36:01 compute-0 python3.9[312159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:03 compute-0 python3.9[312312]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:04 compute-0 podman[312464]: 2025-12-03 01:36:04.717152478 +0000 UTC m=+0.119688487 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:36:04 compute-0 python3.9[312465]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:06 compute-0 python3.9[312639]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:36:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:07 compute-0 python3.9[312791]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:36:08 compute-0 python3.9[312947]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:36:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:10 compute-0 python3.9[313097]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:11 compute-0 python3.9[313218]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725769.3572528-1017-159886683660546/.source.xml follow=False _original_basename=secret.xml.j2 checksum=af5c10e13a0d75758c0266fc0df27b554a39904d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:12 compute-0 python3.9[313370]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 3765feb2-36f8-5b86-b74c-64e9221f9c4c#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:36:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 01:36:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:12 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 01:36:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:13 compute-0 python3.9[313551]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:18 compute-0 python3.9[314014]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:19 compute-0 python3.9[314167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:20 compute-0 python3.9[314245]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:20 compute-0 podman[314270]: 2025-12-03 01:36:20.879030963 +0000 UTC m=+0.126679685 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:36:20 compute-0 podman[314271]: 2025-12-03 01:36:20.880487504 +0000 UTC m=+0.122072085 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  3 01:36:20 compute-0 podman[314272]: 2025-12-03 01:36:20.892497633 +0000 UTC m=+0.129118104 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  3 01:36:20 compute-0 podman[314273]: 2025-12-03 01:36:20.9267735 +0000 UTC m=+0.155908559 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 01:36:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:21 compute-0 python3.9[314479]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:22 compute-0 podman[314603]: 2025-12-03 01:36:22.620462391 +0000 UTC m=+0.142788440 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:36:22 compute-0 python3.9[314648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:23 compute-0 python3.9[314726]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:24 compute-0 python3.9[314878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:25 compute-0 python3.9[314956]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._nqhxvf9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:26 compute-0 python3.9[315108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:36:28
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:28 compute-0 python3.9[315186]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:36:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:36:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:29 compute-0 python3.9[315338]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:36:29 compute-0 podman[158098]: time="2025-12-03T01:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec  3 01:36:29 compute-0 podman[315340]: 2025-12-03 01:36:29.881058147 +0000 UTC m=+0.128750403 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=)
Dec  3 01:36:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:31 compute-0 python3[315511]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:36:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:36:31 compute-0 podman[315557]: 2025-12-03 01:36:31.881717687 +0000 UTC m=+0.132971842 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:36:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:32 compute-0 python3.9[315683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:33 compute-0 python3.9[315761]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:34 compute-0 python3.9[315913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:34 compute-0 podman[315963]: 2025-12-03 01:36:34.957058265 +0000 UTC m=+0.112319360 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:36:35 compute-0 python3.9[316015]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:36 compute-0 python3.9[316167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:37 compute-0 python3.9[316245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:36:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:36:38 compute-0 python3.9[316397]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:39 compute-0 python3.9[316475]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:40 compute-0 python3.9[316627]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:41 compute-0 python3.9[316705]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:42 compute-0 python3.9[316857]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:36:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:44 compute-0 python3.9[317014]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:46 compute-0 python3.9[317168]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:36:47 compute-0 python3.9[317321]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:36:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:48 compute-0 python3.9[317473]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:49 compute-0 python3.9[317625]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:50 compute-0 python3.9[317704]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:51 compute-0 podman[317828]: 2025-12-03 01:36:51.295326959 +0000 UTC m=+0.122282950 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:36:51 compute-0 podman[317830]: 2025-12-03 01:36:51.325392587 +0000 UTC m=+0.139618949 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  3 01:36:51 compute-0 podman[317829]: 2025-12-03 01:36:51.32830686 +0000 UTC m=+0.149554320 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec  3 01:36:51 compute-0 podman[317831]: 2025-12-03 01:36:51.34141807 +0000 UTC m=+0.147363219 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:36:51 compute-0 python3.9[317931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:52 compute-0 python3.9[318015]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:52 compute-0 podman[318100]: 2025-12-03 01:36:52.843595067 +0000 UTC m=+0.097682117 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec  3 01:36:53 compute-0 python3.9[318186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:36:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:55 compute-0 python3.9[318266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:36:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:55 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec  3 01:36:55 compute-0 systemd[1]: session-54.scope: Consumed 3min 3.960s CPU time.
Dec  3 01:36:55 compute-0 systemd-logind[800]: Session 54 logged out. Waiting for processes to exit.
Dec  3 01:36:55 compute-0 systemd-logind[800]: Removed session 54.
Dec  3 01:36:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 978a47cc-5561-4534-951a-b8ec7465d8c2 does not exist
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b653a9cb-3155-4882-b58b-c6273e12270e does not exist
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9e5e0c0-dd85-45b3-bdcb-4a5289f4a237 does not exist
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:36:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:36:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:36:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.278007454 +0000 UTC m=+0.075072129 container create 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.240763113 +0000 UTC m=+0.037827838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:36:59 compute-0 systemd[1]: Started libpod-conmon-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope.
Dec  3 01:36:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.467795388 +0000 UTC m=+0.264860103 container init 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.485210439 +0000 UTC m=+0.282275114 container start 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.492235197 +0000 UTC m=+0.289299912 container attach 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:36:59 compute-0 wonderful_franklin[318580]: 167 167
Dec  3 01:36:59 compute-0 systemd[1]: libpod-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope: Deactivated successfully.
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.497705072 +0000 UTC m=+0.294769777 container died 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1e0620b8c754b7ecf1b03d3e643c9b0fe33a2622cdf470e3891e0a4a7c044e-merged.mount: Deactivated successfully.
Dec  3 01:36:59 compute-0 podman[318564]: 2025-12-03 01:36:59.582635748 +0000 UTC m=+0.379700393 container remove 3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.593 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:36:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:36:59 compute-0 systemd[1]: libpod-conmon-3d3229aefec77ee57c34e98218c944324f40fa6b2a082ab5b7bf4492ba9749da.scope: Deactivated successfully.
Dec  3 01:36:59 compute-0 podman[158098]: time="2025-12-03T01:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7275 "" "Go-http-client/1.1"
Dec  3 01:36:59 compute-0 podman[318603]: 2025-12-03 01:36:59.896110991 +0000 UTC m=+0.096152974 container create f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:36:59 compute-0 podman[318603]: 2025-12-03 01:36:59.861107704 +0000 UTC m=+0.061149737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:36:59 compute-0 systemd[1]: Started libpod-conmon-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope.
Dec  3 01:37:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.09351052 +0000 UTC m=+0.293552463 container init f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:37:00 compute-0 podman[318617]: 2025-12-03 01:37:00.121827469 +0000 UTC m=+0.159180202 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container)
Dec  3 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.12435553 +0000 UTC m=+0.324397513 container start f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:37:00 compute-0 podman[318603]: 2025-12-03 01:37:00.136101561 +0000 UTC m=+0.336143524 container attach f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:37:00 compute-0 systemd-logind[800]: New session 55 of user zuul.
Dec  3 01:37:00 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec  3 01:37:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:01 compute-0 cranky_buck[318625]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:37:01 compute-0 cranky_buck[318625]: --> relative data size: 1.0
Dec  3 01:37:01 compute-0 cranky_buck[318625]: --> All data devices are unavailable
Dec  3 01:37:01 compute-0 systemd[1]: libpod-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Deactivated successfully.
Dec  3 01:37:01 compute-0 systemd[1]: libpod-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Consumed 1.221s CPU time.
Dec  3 01:37:01 compute-0 podman[318603]: 2025-12-03 01:37:01.39514131 +0000 UTC m=+1.595183283 container died f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:37:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-24423ead940ec6abd2029d49a1d59e59e797e74191d1c12a64014f44f47f4952-merged.mount: Deactivated successfully.
Dec  3 01:37:01 compute-0 podman[318603]: 2025-12-03 01:37:01.477126963 +0000 UTC m=+1.677168906 container remove f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_buck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:37:01 compute-0 systemd[1]: libpod-conmon-f613d31647389e6b3616c8cd74aac3f39b5a5cb9abca992bdcb59b0eaa44d1cb.scope: Deactivated successfully.
Dec  3 01:37:02 compute-0 podman[318907]: 2025-12-03 01:37:02.065692547 +0000 UTC m=+0.116301182 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:37:02 compute-0 python3.9[318883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:37:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.651026919 +0000 UTC m=+0.084802093 container create 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.622892845 +0000 UTC m=+0.056668029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:37:02 compute-0 systemd[1]: Started libpod-conmon-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope.
Dec  3 01:37:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.808323686 +0000 UTC m=+0.242098930 container init 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.818260357 +0000 UTC m=+0.252035531 container start 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.825148511 +0000 UTC m=+0.258923695 container attach 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:37:02 compute-0 agitated_zhukovsky[319032]: 167 167
Dec  3 01:37:02 compute-0 systemd[1]: libpod-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope: Deactivated successfully.
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.834410772 +0000 UTC m=+0.268185946 container died 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-61be21ec0b5ba0d82688018c030f0d333d702f825d298d65c72ef6d0485fabad-merged.mount: Deactivated successfully.
Dec  3 01:37:02 compute-0 podman[318998]: 2025-12-03 01:37:02.923596398 +0000 UTC m=+0.357371542 container remove 308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:37:02 compute-0 systemd[1]: libpod-conmon-308eaf34d16fefb0265d7808f4781a9e889fdcf682830a8a7e7198b26b5e8aef.scope: Deactivated successfully.
Dec  3 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.184968922 +0000 UTC m=+0.076324594 container create 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.152042903 +0000 UTC m=+0.043398625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:37:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:03 compute-0 systemd[1]: Started libpod-conmon-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope.
Dec  3 01:37:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.367354867 +0000 UTC m=+0.258710539 container init 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.401226553 +0000 UTC m=+0.292582195 container start 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec  3 01:37:03 compute-0 podman[319106]: 2025-12-03 01:37:03.408068876 +0000 UTC m=+0.299424548 container attach 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:37:03 compute-0 python3.9[319200]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:37:03 compute-0 network[319217]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:37:03 compute-0 network[319218]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:37:03 compute-0 network[319219]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:37:04 compute-0 sleepy_keller[319121]: {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    "0": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "devices": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "/dev/loop3"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            ],
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_name": "ceph_lv0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_size": "21470642176",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "name": "ceph_lv0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "tags": {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_name": "ceph",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.crush_device_class": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.encrypted": "0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_id": "0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.vdo": "0"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            },
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "vg_name": "ceph_vg0"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        }
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    ],
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    "1": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "devices": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "/dev/loop4"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            ],
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_name": "ceph_lv1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_size": "21470642176",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "name": "ceph_lv1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "tags": {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_name": "ceph",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.crush_device_class": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.encrypted": "0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_id": "1",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.vdo": "0"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            },
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "vg_name": "ceph_vg1"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        }
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    ],
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    "2": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "devices": [
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "/dev/loop5"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            ],
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_name": "ceph_lv2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_size": "21470642176",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "name": "ceph_lv2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "tags": {
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.cluster_name": "ceph",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.crush_device_class": "",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.encrypted": "0",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osd_id": "2",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:                "ceph.vdo": "0"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            },
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "type": "block",
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:            "vg_name": "ceph_vg2"
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:        }
Dec  3 01:37:04 compute-0 sleepy_keller[319121]:    ]
Dec  3 01:37:04 compute-0 sleepy_keller[319121]: }
Dec  3 01:37:04 compute-0 podman[319106]: 2025-12-03 01:37:04.264858677 +0000 UTC m=+1.156214369 container died 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:37:04 compute-0 systemd[1]: libpod-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope: Deactivated successfully.
Dec  3 01:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-29632595b22dc54467d2fa7afb5b1fedab13b6a5b0147647b0953ab1d6644215-merged.mount: Deactivated successfully.
Dec  3 01:37:04 compute-0 podman[319106]: 2025-12-03 01:37:04.99269898 +0000 UTC m=+1.884054622 container remove 103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:37:05 compute-0 systemd[1]: libpod-conmon-103dcf19711bab371769877d45d671a9511bc9e9fdde6576e434ca6ba458abbc.scope: Deactivated successfully.
Dec  3 01:37:05 compute-0 podman[319246]: 2025-12-03 01:37:05.112422447 +0000 UTC m=+0.097242244 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:37:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.018354154 +0000 UTC m=+0.094602039 container create d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:05.978691196 +0000 UTC m=+0.054939151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:37:06 compute-0 systemd[1]: Started libpod-conmon-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope.
Dec  3 01:37:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.164804716 +0000 UTC m=+0.241052661 container init d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.178834212 +0000 UTC m=+0.255082107 container start d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.185394786 +0000 UTC m=+0.261642731 container attach d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:37:06 compute-0 awesome_beaver[319454]: 167 167
Dec  3 01:37:06 compute-0 systemd[1]: libpod-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope: Deactivated successfully.
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.189633175 +0000 UTC m=+0.265881070 container died d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8341c4b144c07df8c3a3dd334fd784e17b4b89c4a60493e46a55c04e060f19c-merged.mount: Deactivated successfully.
Dec  3 01:37:06 compute-0 podman[319433]: 2025-12-03 01:37:06.260723321 +0000 UTC m=+0.336971216 container remove d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:37:06 compute-0 systemd[1]: libpod-conmon-d8e083fab70817cdedf9bd3680a6339f502ab511989e5097113a42a6e6cb70ec.scope: Deactivated successfully.
Dec  3 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.536929123 +0000 UTC m=+0.092721917 container create ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.506811693 +0000 UTC m=+0.062604537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:37:06 compute-0 systemd[1]: Started libpod-conmon-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope.
Dec  3 01:37:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.725989667 +0000 UTC m=+0.281782471 container init ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.756838657 +0000 UTC m=+0.312631441 container start ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:37:06 compute-0 podman[319487]: 2025-12-03 01:37:06.762925498 +0000 UTC m=+0.318718262 container attach ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:37:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:07 compute-0 relaxed_wright[319510]: {
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_id": 2,
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "type": "bluestore"
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    },
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_id": 1,
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "type": "bluestore"
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    },
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_id": 0,
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:        "type": "bluestore"
Dec  3 01:37:07 compute-0 relaxed_wright[319510]:    }
Dec  3 01:37:07 compute-0 relaxed_wright[319510]: }
Dec  3 01:37:07 compute-0 systemd[1]: libpod-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Deactivated successfully.
Dec  3 01:37:07 compute-0 podman[319487]: 2025-12-03 01:37:07.997206639 +0000 UTC m=+1.552999403 container died ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:37:07 compute-0 systemd[1]: libpod-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Consumed 1.230s CPU time.
Dec  3 01:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc94c766df843d8a645829a0bfb9d0891c21a95552ffad435b178f13464e197-merged.mount: Deactivated successfully.
Dec  3 01:37:08 compute-0 podman[319487]: 2025-12-03 01:37:08.10641313 +0000 UTC m=+1.662205894 container remove ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wright, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:37:08 compute-0 systemd[1]: libpod-conmon-ee4d811b41c8002f6c990c07d686f12f08a152f44b4f8c805ec88ef5650cbf0b.scope: Deactivated successfully.
Dec  3 01:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:37:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:37:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:37:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f30f5015-d2de-438e-9e16-be90424f0b83 does not exist
Dec  3 01:37:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d38593e6-9a09-47ce-82d0-045c817c70ea does not exist
Dec  3 01:37:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:37:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:37:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:10 compute-0 python3.9[319817]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:37:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:11 compute-0 python3.9[319901]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:37:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:13 compute-0 python3.9[320054]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:37:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:15 compute-0 python3.9[320206]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:37:17 compute-0 python3.9[320359]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:37:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.280799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837280869, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1248, "num_deletes": 507, "total_data_size": 1436496, "memory_usage": 1470752, "flush_reason": "Manual Compaction"}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837292389, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1412035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13540, "largest_seqno": 14787, "table_properties": {"data_size": 1406542, "index_size": 2441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14176, "raw_average_key_size": 17, "raw_value_size": 1393506, "raw_average_value_size": 1761, "num_data_blocks": 112, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725742, "oldest_key_time": 1764725742, "file_creation_time": 1764725837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11652 microseconds, and 5032 cpu microseconds.
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.292451) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1412035 bytes OK
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.292482) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294518) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294552) EVENT_LOG_v1 {"time_micros": 1764725837294548, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.294574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1429749, prev total WAL file size 1429749, number of live WAL files 2.
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.295644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1378KB)], [32(7389KB)]
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837295739, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8978582, "oldest_snapshot_seqno": -1}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3765 keys, 7052455 bytes, temperature: kUnknown
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837352741, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7052455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7025567, "index_size": 16347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92372, "raw_average_key_size": 24, "raw_value_size": 6955693, "raw_average_value_size": 1847, "num_data_blocks": 694, "num_entries": 3765, "num_filter_entries": 3765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764725837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.353156) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7052455 bytes
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.358271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.2 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 4792, records dropped: 1027 output_compression: NoCompression
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.358305) EVENT_LOG_v1 {"time_micros": 1764725837358290, "job": 14, "event": "compaction_finished", "compaction_time_micros": 57115, "compaction_time_cpu_micros": 34176, "output_level": 6, "num_output_files": 1, "total_output_size": 7052455, "num_input_records": 4792, "num_output_records": 3765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837359027, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764725837361627, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.295353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:37:17.361836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:37:18 compute-0 python3.9[320511]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:37:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:19 compute-0 python3.9[320666]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:37:20 compute-0 python3.9[320790]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725838.6386387-95-52272453686439/.source.iscsi _original_basename=.c0gcynkp follow=False checksum=45ae4747473aca1feb0876e067fc4836f7675e84 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:21 compute-0 podman[320914]: 2025-12-03 01:37:21.666312711 +0000 UTC m=+0.105933579 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:37:21 compute-0 podman[320916]: 2025-12-03 01:37:21.689384122 +0000 UTC m=+0.120287644 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible)
Dec  3 01:37:21 compute-0 podman[320915]: 2025-12-03 01:37:21.705416735 +0000 UTC m=+0.142948344 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, version=9.6, architecture=x86_64, vcs-type=git)
Dec  3 01:37:21 compute-0 podman[320917]: 2025-12-03 01:37:21.721864949 +0000 UTC m=+0.141726170 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 01:37:21 compute-0 python3.9[321019]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:23 compute-0 podman[321179]: 2025-12-03 01:37:23.081179306 +0000 UTC m=+0.137616373 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2)
Dec  3 01:37:23 compute-0 python3.9[321180]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:24 compute-0 python3.9[321350]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:37:24 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  3 01:37:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:27 compute-0 python3.9[321508]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:37:27 compute-0 systemd[1]: Reloading.
Dec  3 01:37:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:37:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:37:27 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 01:37:27 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  3 01:37:27 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec  3 01:37:27 compute-0 systemd[1]: Started Open-iSCSI.
Dec  3 01:37:28 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  3 01:37:28 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:37:28
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms']
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:37:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:37:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:29 compute-0 python3.9[321709]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:37:29 compute-0 network[321726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:37:29 compute-0 network[321727]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:37:29 compute-0 network[321728]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:37:29 compute-0 podman[158098]: time="2025-12-03T01:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7294 "" "Go-http-client/1.1"
Dec  3 01:37:30 compute-0 podman[321735]: 2025-12-03 01:37:30.654991037 +0000 UTC m=+0.178035081 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  3 01:37:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:37:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:37:32 compute-0 podman[321799]: 2025-12-03 01:37:32.273840776 +0000 UTC m=+0.143531983 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:37:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:35 compute-0 podman[322008]: 2025-12-03 01:37:35.687141835 +0000 UTC m=+0.155273202 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:37:35 compute-0 python3.9[322059]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 01:37:37 compute-0 python3.9[322211]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  3 01:37:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:37:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:37:39 compute-0 python3.9[322367]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:37:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:40 compute-0 python3.9[322490]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725857.5608733-172-76672569751921/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3309 writes, 14K keys, 3309 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3309 writes, 3309 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1272 writes, 5778 keys, 1272 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s#012Interval WAL: 1272 writes, 1272 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.9      0.16              0.07         7    0.022       0      0       0.0       0.0#012  L6      1/0    6.73 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    124.6    102.9      0.39              0.20         6    0.065     24K   3201       0.0       0.0#012 Sum      1/0    6.73 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     89.0    101.2      0.55              0.28        13    0.042     24K   3201       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    108.9    109.8      0.31              0.16         8    0.039     17K   2472       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    124.6    102.9      0.39              0.20         6    0.065     24K   3201       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.4      0.15              0.07         6    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 1.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(107,1.29 MB,0.417283%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,144.80 KB,0.0459101%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.974 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.975 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.981 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.982 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.995 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:37:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:37:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:42 compute-0 python3.9[322643]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:43 compute-0 python3.9[322795]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:37:43 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 01:37:43 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  3 01:37:43 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  3 01:37:43 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  3 01:37:43 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec  3 01:37:45 compute-0 python3.9[322951]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:37:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:46 compute-0 python3.9[323103]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:47 compute-0 python3.9[323255]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:37:48 compute-0 python3.9[323409]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:37:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:49 compute-0 python3.9[323532]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725867.685499-230-1744332121386/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:50 compute-0 python3.9[323685]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:37:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:51 compute-0 python3.9[323838]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:51 compute-0 podman[323844]: 2025-12-03 01:37:51.845605409 +0000 UTC m=+0.100047221 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:37:51 compute-0 podman[323848]: 2025-12-03 01:37:51.879018878 +0000 UTC m=+0.118354376 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 01:37:51 compute-0 podman[323854]: 2025-12-03 01:37:51.887839625 +0000 UTC m=+0.122893743 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 01:37:51 compute-0 podman[323867]: 2025-12-03 01:37:51.910192223 +0000 UTC m=+0.131080043 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  3 01:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:53 compute-0 podman[324042]: 2025-12-03 01:37:53.816391843 +0000 UTC m=+0.169161923 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:37:54 compute-0 python3.9[324089]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:55 compute-0 python3.9[324242]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:57 compute-0 python3.9[324394]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:37:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:58 compute-0 python3.9[324546]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:37:59 compute-0 python3.9[324698]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:37:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.595 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.597 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:37:59.597 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:37:59 compute-0 podman[158098]: time="2025-12-03T01:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7272 "" "Go-http-client/1.1"
Dec  3 01:38:00 compute-0 python3.9[324850]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:00 compute-0 podman[324876]: 2025-12-03 01:38:00.882039426 +0000 UTC m=+0.135816466 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, config_id=edpm, release-0.7.12=, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 01:38:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:38:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:38:01 compute-0 python3.9[325023]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:38:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:02 compute-0 podman[325149]: 2025-12-03 01:38:02.437270338 +0000 UTC m=+0.100350319 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:38:02 compute-0 python3.9[325195]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:03 compute-0 python3.9[325348]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:04 compute-0 python3.9[325500]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:05 compute-0 podman[325561]: 2025-12-03 01:38:05.864685634 +0000 UTC m=+0.115295149 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:38:06 compute-0 python3.9[325589]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:07 compute-0 python3.9[325756]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:08 compute-0 python3.9[325834]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:09 compute-0 python3.9[326102]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d313723e-c6d8-434a-b9b0-c9bd0e15e807 does not exist
Dec  3 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cd777488-7c3d-4792-9ac2-af2e5ae300e2 does not exist
Dec  3 01:38:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 29fbddfb-77ba-4108-b0b3-7dfe2d6b498b does not exist
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:38:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:38:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:38:10 compute-0 python3.9[326379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.768993081 +0000 UTC m=+0.094288229 container create 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.735647944 +0000 UTC m=+0.060943152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:10 compute-0 systemd[1]: Started libpod-conmon-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope.
Dec  3 01:38:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.924778577 +0000 UTC m=+0.250073765 container init 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.943097641 +0000 UTC m=+0.268392779 container start 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.950361145 +0000 UTC m=+0.275656333 container attach 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:38:10 compute-0 eloquent_cray[326445]: 167 167
Dec  3 01:38:10 compute-0 systemd[1]: libpod-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope: Deactivated successfully.
Dec  3 01:38:10 compute-0 podman[326407]: 2025-12-03 01:38:10.956290112 +0000 UTC m=+0.281585250 container died 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-54421240329fc95f60bb31752c5b0098ed59f7c627b282ebc0cf89c541440b3d-merged.mount: Deactivated successfully.
Dec  3 01:38:11 compute-0 podman[326407]: 2025-12-03 01:38:11.033666215 +0000 UTC m=+0.358961343 container remove 3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:11 compute-0 systemd[1]: libpod-conmon-3acb9bd0f1c08191a8729dfdc4dbdfcae40578867755d9447547208dd3329a76.scope: Deactivated successfully.
Dec  3 01:38:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.354951169 +0000 UTC m=+0.106167973 container create af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.314377409 +0000 UTC m=+0.065594243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:11 compute-0 python3.9[326521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:11 compute-0 systemd[1]: Started libpod-conmon-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope.
Dec  3 01:38:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.494411356 +0000 UTC m=+0.245628150 container init af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.512118483 +0000 UTC m=+0.263335287 container start af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:38:11 compute-0 podman[326524]: 2025-12-03 01:38:11.517660629 +0000 UTC m=+0.268877443 container attach af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:38:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:12 compute-0 python3.9[326700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:12 compute-0 great_nash[326540]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:38:12 compute-0 great_nash[326540]: --> relative data size: 1.0
Dec  3 01:38:12 compute-0 great_nash[326540]: --> All data devices are unavailable
Dec  3 01:38:12 compute-0 systemd[1]: libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Deactivated successfully.
Dec  3 01:38:12 compute-0 systemd[1]: libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Consumed 1.184s CPU time.
Dec  3 01:38:12 compute-0 conmon[326540]: conmon af09ba3258194515247d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope/container/memory.events
Dec  3 01:38:12 compute-0 podman[326524]: 2025-12-03 01:38:12.793014801 +0000 UTC m=+1.544231635 container died af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-038460924a241a866b63701b0cebb9b00a593df52a13d38b760c1f2259fb6c15-merged.mount: Deactivated successfully.
Dec  3 01:38:12 compute-0 podman[326524]: 2025-12-03 01:38:12.9058732 +0000 UTC m=+1.657090004 container remove af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:38:12 compute-0 systemd[1]: libpod-conmon-af09ba3258194515247d713912d4df51bf7a01778ae49696d694887b510c4e46.scope: Deactivated successfully.
Dec  3 01:38:13 compute-0 python3.9[326811]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.051641892 +0000 UTC m=+0.092005095 container create e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.019822908 +0000 UTC m=+0.060186192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:14 compute-0 systemd[1]: Started libpod-conmon-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope.
Dec  3 01:38:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.189874564 +0000 UTC m=+0.230237817 container init e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.207272002 +0000 UTC m=+0.247635185 container start e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.211517852 +0000 UTC m=+0.251881135 container attach e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:38:14 compute-0 trusting_saha[327116]: 167 167
Dec  3 01:38:14 compute-0 systemd[1]: libpod-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope: Deactivated successfully.
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.221132402 +0000 UTC m=+0.261495635 container died e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4075aa79130f8ae54aa22882916854fa73d4e08a1fd8f82915826327246fc852-merged.mount: Deactivated successfully.
Dec  3 01:38:14 compute-0 podman[327070]: 2025-12-03 01:38:14.30332409 +0000 UTC m=+0.343687313 container remove e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:38:14 compute-0 systemd[1]: libpod-conmon-e0ad029f6ed82db12ca1917e17d941f1bf9891d9e7a29ccb95c21579c0ad1cfa.scope: Deactivated successfully.
Dec  3 01:38:14 compute-0 python3.9[327115]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:38:14 compute-0 systemd[1]: Reloading.
Dec  3 01:38:14 compute-0 podman[327140]: 2025-12-03 01:38:14.579005974 +0000 UTC m=+0.073002302 container create fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:38:14 compute-0 podman[327140]: 2025-12-03 01:38:14.547137858 +0000 UTC m=+0.041134186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:38:14 compute-0 systemd[1]: Started libpod-conmon-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope.
Dec  3 01:38:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.059476499 +0000 UTC m=+0.553472827 container init fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.0976381 +0000 UTC m=+0.591634418 container start fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:38:15 compute-0 podman[327140]: 2025-12-03 01:38:15.103596568 +0000 UTC m=+0.597592876 container attach fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:38:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]: {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    "0": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "devices": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "/dev/loop3"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            ],
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_name": "ceph_lv0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_size": "21470642176",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "name": "ceph_lv0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "tags": {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_name": "ceph",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.crush_device_class": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.encrypted": "0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_id": "0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.vdo": "0"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            },
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "vg_name": "ceph_vg0"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        }
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    ],
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    "1": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "devices": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "/dev/loop4"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            ],
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_name": "ceph_lv1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_size": "21470642176",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "name": "ceph_lv1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "tags": {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_name": "ceph",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.crush_device_class": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.encrypted": "0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_id": "1",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.vdo": "0"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            },
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "vg_name": "ceph_vg1"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        }
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    ],
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    "2": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "devices": [
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "/dev/loop5"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            ],
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_name": "ceph_lv2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_size": "21470642176",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "name": "ceph_lv2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "tags": {
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.cluster_name": "ceph",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.crush_device_class": "",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.encrypted": "0",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osd_id": "2",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:                "ceph.vdo": "0"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            },
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "type": "block",
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:            "vg_name": "ceph_vg2"
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:        }
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]:    ]
Dec  3 01:38:15 compute-0 strange_brahmagupta[327191]: }
Dec  3 01:38:15 compute-0 systemd[1]: libpod-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope: Deactivated successfully.
Dec  3 01:38:15 compute-0 podman[327350]: 2025-12-03 01:38:15.982110083 +0000 UTC m=+0.050511160 container died fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-df27e3c4d7aece84573a730eaf3edbf93e99b18275fee26843dfa1e1c8e824cf-merged.mount: Deactivated successfully.
Dec  3 01:38:16 compute-0 podman[327350]: 2025-12-03 01:38:16.086152705 +0000 UTC m=+0.154553712 container remove fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:38:16 compute-0 systemd[1]: libpod-conmon-fe0777c4aa5e1f324f3558d518581ee1c01b5d5c26f538ec5dab14817904a698.scope: Deactivated successfully.
Dec  3 01:38:16 compute-0 python3.9[327357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:16 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 01:38:16 compute-0 python3.9[327544]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.200184156 +0000 UTC m=+0.084067423 container create a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.16187812 +0000 UTC m=+0.045761437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:17 compute-0 systemd[1]: Started libpod-conmon-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope.
Dec  3 01:38:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.337109831 +0000 UTC m=+0.220993108 container init a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.349307944 +0000 UTC m=+0.233191211 container start a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.35591877 +0000 UTC m=+0.239802037 container attach a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:38:17 compute-0 vibrant_kalam[327649]: 167 167
Dec  3 01:38:17 compute-0 systemd[1]: libpod-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope: Deactivated successfully.
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.358876013 +0000 UTC m=+0.242759290 container died a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d92bdcdb2c213f1d4be0403496cb30611f666fc6b3835b5110900ccb0cd9d025-merged.mount: Deactivated successfully.
Dec  3 01:38:17 compute-0 podman[327610]: 2025-12-03 01:38:17.42429775 +0000 UTC m=+0.308181007 container remove a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:38:17 compute-0 systemd[1]: libpod-conmon-a5d6ec123394c376f1565efbe01c7b8db27cba173e3fd226868705c36ea242f3.scope: Deactivated successfully.
Dec  3 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.61690222 +0000 UTC m=+0.066807357 container create 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.583990856 +0000 UTC m=+0.033896033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:38:17 compute-0 systemd[1]: Started libpod-conmon-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope.
Dec  3 01:38:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.777826989 +0000 UTC m=+0.227732106 container init 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  3 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.807435431 +0000 UTC m=+0.257340568 container start 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:38:17 compute-0 podman[327723]: 2025-12-03 01:38:17.816877726 +0000 UTC m=+0.266782863 container attach 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:38:18 compute-0 python3.9[327797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:19 compute-0 pedantic_cray[327764]: {
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_id": 2,
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "type": "bluestore"
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    },
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_id": 1,
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "type": "bluestore"
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    },
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_id": 0,
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:        "type": "bluestore"
Dec  3 01:38:19 compute-0 pedantic_cray[327764]:    }
Dec  3 01:38:19 compute-0 pedantic_cray[327764]: }
Dec  3 01:38:19 compute-0 systemd[1]: libpod-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Deactivated successfully.
Dec  3 01:38:19 compute-0 podman[327723]: 2025-12-03 01:38:19.063350126 +0000 UTC m=+1.513255233 container died 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:38:19 compute-0 systemd[1]: libpod-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Consumed 1.252s CPU time.
Dec  3 01:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-457e210623ab732be736a40114591d3353af672aca42c3eefa6ace4aff2a77d7-merged.mount: Deactivated successfully.
Dec  3 01:38:19 compute-0 podman[327723]: 2025-12-03 01:38:19.164024074 +0000 UTC m=+1.613929201 container remove 4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:38:19 compute-0 systemd[1]: libpod-conmon-4b76f0bd90f4fb0a1d2612627f1a82cbaefddb5924047cdbfdfd7fe6fa9f5522.scope: Deactivated successfully.
Dec  3 01:38:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:38:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:38:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 40928c2b-2f82-4909-81e1-3e83bda56c4c does not exist
Dec  3 01:38:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7908a63-6ae5-49a0-9cca-5b2cbb468db6 does not exist
Dec  3 01:38:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:19 compute-0 python3.9[327918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:38:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:21 compute-0 python3.9[328121]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:38:21 compute-0 systemd[1]: Reloading.
Dec  3 01:38:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:38:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:38:22 compute-0 systemd[1]: Starting Create netns directory...
Dec  3 01:38:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 01:38:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 01:38:22 compute-0 systemd[1]: Finished Create netns directory.
Dec  3 01:38:22 compute-0 podman[328159]: 2025-12-03 01:38:22.338331231 +0000 UTC m=+0.100275327 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 01:38:22 compute-0 podman[328158]: 2025-12-03 01:38:22.352596602 +0000 UTC m=+0.118896110 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  3 01:38:22 compute-0 podman[328157]: 2025-12-03 01:38:22.357416217 +0000 UTC m=+0.123336705 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:38:22 compute-0 podman[328160]: 2025-12-03 01:38:22.377583494 +0000 UTC m=+0.141696341 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 01:38:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:23 compute-0 python3.9[328398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:24 compute-0 podman[328522]: 2025-12-03 01:38:24.550430714 +0000 UTC m=+0.173762272 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm)
Dec  3 01:38:24 compute-0 python3.9[328570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:25 compute-0 python3.9[328694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764725903.8757424-437-196488983808366/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:27 compute-0 python3.9[328846]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:38:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:28 compute-0 python3.9[328998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:38:28
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups']
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:38:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:38:29 compute-0 python3.9[329121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725907.432187-462-209297975865546/.source.json _original_basename=.mprbrqzm follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:29 compute-0 podman[158098]: time="2025-12-03T01:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35732 "" "Go-http-client/1.1"
Dec  3 01:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Dec  3 01:38:30 compute-0 python3.9[329273]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:31 compute-0 podman[329397]: 2025-12-03 01:38:31.144930735 +0000 UTC m=+0.144033617 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:38:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:38:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:32 compute-0 podman[329568]: 2025-12-03 01:38:32.859122831 +0000 UTC m=+0.107307395 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  3 01:38:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:34 compute-0 python3.9[329737]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  3 01:38:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:35 compute-0 python3.9[329889]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:38:36 compute-0 podman[329966]: 2025-12-03 01:38:36.865235522 +0000 UTC m=+0.117103140 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:38:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:37 compute-0 python3.9[330065]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 01:38:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:38:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:38:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:39 compute-0 python3[330242]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:38:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:41 compute-0 podman[330253]: 2025-12-03 01:38:41.594750381 +0000 UTC m=+1.669979056 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 01:38:41 compute-0 podman[330307]: 2025-12-03 01:38:41.842933002 +0000 UTC m=+0.089066773 container create df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:38:41 compute-0 podman[330307]: 2025-12-03 01:38:41.797400363 +0000 UTC m=+0.043534184 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 01:38:41 compute-0 python3[330242]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 01:38:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:43 compute-0 python3.9[330494]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:38:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:44 compute-0 python3.9[330648]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:45 compute-0 python3.9[330724]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:38:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:46 compute-0 python3.9[330875]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764725925.4086745-550-194272009794186/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:47 compute-0 python3.9[330951]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:38:47 compute-0 systemd[1]: Reloading.
Dec  3 01:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:38:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:38:48 compute-0 python3.9[331064]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:38:48 compute-0 systemd[1]: Reloading.
Dec  3 01:38:48 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:38:48 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:38:49 compute-0 systemd[1]: Starting multipathd container...
Dec  3 01:38:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec  3 01:38:49 compute-0 podman[331105]: 2025-12-03 01:38:49.506323755 +0000 UTC m=+0.260316873 container init df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 01:38:49 compute-0 multipathd[331119]: + sudo -E kolla_set_configs
Dec  3 01:38:49 compute-0 podman[331105]: 2025-12-03 01:38:49.552906143 +0000 UTC m=+0.306899271 container start df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 01:38:49 compute-0 podman[331105]: multipathd
Dec  3 01:38:49 compute-0 systemd[1]: Started multipathd container.
Dec  3 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Validating config file
Dec  3 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:38:49 compute-0 multipathd[331119]: INFO:__main__:Writing out command to execute
Dec  3 01:38:49 compute-0 multipathd[331119]: ++ cat /run_command
Dec  3 01:38:49 compute-0 multipathd[331119]: + CMD='/usr/sbin/multipathd -d'
Dec  3 01:38:49 compute-0 multipathd[331119]: + ARGS=
Dec  3 01:38:49 compute-0 multipathd[331119]: + sudo kolla_copy_cacerts
Dec  3 01:38:49 compute-0 podman[331127]: 2025-12-03 01:38:49.661881644 +0000 UTC m=+0.085245426 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 01:38:49 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:38:49 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.service: Failed with result 'exit-code'.
Dec  3 01:38:49 compute-0 multipathd[331119]: + [[ ! -n '' ]]
Dec  3 01:38:49 compute-0 multipathd[331119]: + . kolla_extend_start
Dec  3 01:38:49 compute-0 multipathd[331119]: Running command: '/usr/sbin/multipathd -d'
Dec  3 01:38:49 compute-0 multipathd[331119]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 01:38:49 compute-0 multipathd[331119]: + umask 0022
Dec  3 01:38:49 compute-0 multipathd[331119]: + exec /usr/sbin/multipathd -d
Dec  3 01:38:49 compute-0 multipathd[331119]: 4769.378834 | --------start up--------
Dec  3 01:38:49 compute-0 multipathd[331119]: 4769.378870 | read /etc/multipath.conf
Dec  3 01:38:49 compute-0 multipathd[331119]: 4769.391541 | path checkers start up
Dec  3 01:38:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:51 compute-0 python3.9[331308]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:38:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:52 compute-0 podman[331434]: 2025-12-03 01:38:52.514241118 +0000 UTC m=+0.096977175 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:38:52 compute-0 podman[331435]: 2025-12-03 01:38:52.52960155 +0000 UTC m=+0.101808691 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Dec  3 01:38:52 compute-0 podman[331436]: 2025-12-03 01:38:52.54742295 +0000 UTC m=+0.118292203 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:38:52 compute-0 podman[331438]: 2025-12-03 01:38:52.561121075 +0000 UTC m=+0.118442118 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:38:52 compute-0 python3.9[331535]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:38:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:54 compute-0 python3.9[331709]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:38:54 compute-0 systemd[1]: Stopping multipathd container...
Dec  3 01:38:54 compute-0 multipathd[331119]: 4773.882690 | exit (signal)
Dec  3 01:38:54 compute-0 multipathd[331119]: 4773.882906 | --------shut down-------
Dec  3 01:38:54 compute-0 systemd[1]: libpod-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec  3 01:38:54 compute-0 podman[331713]: 2025-12-03 01:38:54.256069671 +0000 UTC m=+0.127784730 container died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-d9d42129a6f5ed5.timer: Deactivated successfully.
Dec  3 01:38:54 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec  3 01:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-userdata-shm.mount: Deactivated successfully.
Dec  3 01:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab-merged.mount: Deactivated successfully.
Dec  3 01:38:54 compute-0 podman[331713]: 2025-12-03 01:38:54.36178919 +0000 UTC m=+0.233504199 container cleanup df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 01:38:54 compute-0 podman[331713]: multipathd
Dec  3 01:38:54 compute-0 podman[331742]: multipathd
Dec  3 01:38:54 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  3 01:38:54 compute-0 systemd[1]: Stopped multipathd container.
Dec  3 01:38:54 compute-0 systemd[1]: Starting multipathd container...
Dec  3 01:38:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a0ca4b56dffc6a4e58bda68c2eec33330d1dbcd40d12da48433ad0c5e77eab/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 01:38:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.
Dec  3 01:38:54 compute-0 podman[331752]: 2025-12-03 01:38:54.653093392 +0000 UTC m=+0.173490874 container init df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  3 01:38:54 compute-0 multipathd[331766]: + sudo -E kolla_set_configs
Dec  3 01:38:54 compute-0 podman[331752]: 2025-12-03 01:38:54.700418831 +0000 UTC m=+0.220816293 container start df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  3 01:38:54 compute-0 podman[331752]: multipathd
Dec  3 01:38:54 compute-0 systemd[1]: Started multipathd container.
Dec  3 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Validating config file
Dec  3 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:38:54 compute-0 multipathd[331766]: INFO:__main__:Writing out command to execute
Dec  3 01:38:54 compute-0 podman[331769]: 2025-12-03 01:38:54.777778954 +0000 UTC m=+0.146011962 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 01:38:54 compute-0 multipathd[331766]: ++ cat /run_command
Dec  3 01:38:54 compute-0 multipathd[331766]: + CMD='/usr/sbin/multipathd -d'
Dec  3 01:38:54 compute-0 multipathd[331766]: + ARGS=
Dec  3 01:38:54 compute-0 multipathd[331766]: + sudo kolla_copy_cacerts
Dec  3 01:38:54 compute-0 multipathd[331766]: Running command: '/usr/sbin/multipathd -d'
Dec  3 01:38:54 compute-0 multipathd[331766]: + [[ ! -n '' ]]
Dec  3 01:38:54 compute-0 multipathd[331766]: + . kolla_extend_start
Dec  3 01:38:54 compute-0 multipathd[331766]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 01:38:54 compute-0 multipathd[331766]: + umask 0022
Dec  3 01:38:54 compute-0 multipathd[331766]: + exec /usr/sbin/multipathd -d
Dec  3 01:38:54 compute-0 multipathd[331766]: 4774.516844 | --------start up--------
Dec  3 01:38:54 compute-0 multipathd[331766]: 4774.516878 | read /etc/multipath.conf
Dec  3 01:38:54 compute-0 podman[331783]: 2025-12-03 01:38:54.859643893 +0000 UTC m=+0.127504032 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  3 01:38:54 compute-0 multipathd[331766]: 4774.532488 | path checkers start up
Dec  3 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-3a5a14b6f5856a05.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:38:54 compute-0 systemd[1]: df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630-3a5a14b6f5856a05.service: Failed with result 'exit-code'.
Dec  3 01:38:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:55 compute-0 python3.9[331975]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:38:57 compute-0 python3.9[332127]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 01:38:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:38:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:58 compute-0 python3.9[332279]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  3 01:38:58 compute-0 kernel: Key type psk registered
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:38:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:38:59 compute-0 python3.9[332442]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.596 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:38:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:38:59 compute-0 podman[158098]: time="2025-12-03T01:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec  3 01:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7691 "" "Go-http-client/1.1"
Dec  3 01:39:00 compute-0 python3.9[332565]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764725938.749532-630-18200752187020/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:39:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:39:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:01 compute-0 podman[332689]: 2025-12-03 01:39:01.592823009 +0000 UTC m=+0.142396400 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 01:39:01 compute-0 python3.9[332734]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:03 compute-0 podman[332859]: 2025-12-03 01:39:03.690940821 +0000 UTC m=+0.116861304 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  3 01:39:04 compute-0 python3.9[332903]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:39:04 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 01:39:04 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  3 01:39:04 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  3 01:39:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  3 01:39:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec  3 01:39:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:05 compute-0 python3.9[333059]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:39:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:07 compute-0 podman[333064]: 2025-12-03 01:39:07.864991687 +0000 UTC m=+0.119307662 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:39:08 compute-0 systemd[1]: Reloading.
Dec  3 01:39:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:39:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:39:08 compute-0 systemd[1]: Reloading.
Dec  3 01:39:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:39:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:39:09 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  3 01:39:09 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  3 01:39:09 compute-0 lvm[333197]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 01:39:09 compute-0 lvm[333197]: VG ceph_vg1 finished
Dec  3 01:39:09 compute-0 lvm[333198]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 01:39:09 compute-0 lvm[333198]: VG ceph_vg2 finished
Dec  3 01:39:09 compute-0 lvm[333202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 01:39:09 compute-0 lvm[333202]: VG ceph_vg0 finished
Dec  3 01:39:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 01:39:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  3 01:39:09 compute-0 systemd[1]: Reloading.
Dec  3 01:39:09 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:39:09 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:39:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 01:39:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:11 compute-0 python3.9[334361]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:39:11 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec  3 01:39:11 compute-0 iscsid[321548]: iscsid shutting down.
Dec  3 01:39:11 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec  3 01:39:11 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec  3 01:39:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 01:39:11 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  3 01:39:11 compute-0 systemd[1]: Started Open-iSCSI.
Dec  3 01:39:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 01:39:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  3 01:39:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.607s CPU time.
Dec  3 01:39:11 compute-0 systemd[1]: run-r278b756dfbb642aea5910ce2f28e2a42.service: Deactivated successfully.
Dec  3 01:39:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:12 compute-0 python3.9[334696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:39:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:14 compute-0 python3.9[334852]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:15 compute-0 python3.9[335006]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:39:15 compute-0 systemd[1]: Reloading.
Dec  3 01:39:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:39:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:39:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:17 compute-0 python3.9[335191]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:39:17 compute-0 network[335208]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:39:17 compute-0 network[335209]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:39:17 compute-0 network[335210]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:39:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a3f4c0b8-2f62-4018-b5c4-5475fbd349e0 does not exist
Dec  3 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a10b34f-030f-4c18-aedf-89e2d8a80df9 does not exist
Dec  3 01:39:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5a034a2d-b877-4b8e-a505-de0cf2ad74e2 does not exist
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:39:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:39:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:21 compute-0 podman[335583]: 2025-12-03 01:39:21.934956362 +0000 UTC m=+0.107580823 container create 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 01:39:21 compute-0 podman[335583]: 2025-12-03 01:39:21.885654987 +0000 UTC m=+0.058279518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:22 compute-0 systemd[1]: Started libpod-conmon-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope.
Dec  3 01:39:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.056971599 +0000 UTC m=+0.229596070 container init 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.074510041 +0000 UTC m=+0.247134492 container start 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 01:39:22 compute-0 podman[335583]: 2025-12-03 01:39:22.079207403 +0000 UTC m=+0.251831964 container attach 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:39:22 compute-0 determined_snyder[335604]: 167 167
Dec  3 01:39:22 compute-0 systemd[1]: libpod-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope: Deactivated successfully.
Dec  3 01:39:22 compute-0 podman[335612]: 2025-12-03 01:39:22.164691694 +0000 UTC m=+0.052346661 container died 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac5f71a4939bf973946eb3e56aa2320191d3b7ecc5574d5760ceee14cf976ae-merged.mount: Deactivated successfully.
Dec  3 01:39:22 compute-0 podman[335612]: 2025-12-03 01:39:22.22580972 +0000 UTC m=+0.113464687 container remove 1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:39:22 compute-0 systemd[1]: libpod-conmon-1cf2504e41042895edf2af18d65501351dda47c349c8d5666fd4b44eae9fe100.scope: Deactivated successfully.
Dec  3 01:39:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.538322138 +0000 UTC m=+0.099845556 container create e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.497286925 +0000 UTC m=+0.058810423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:22 compute-0 systemd[1]: Started libpod-conmon-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope.
Dec  3 01:39:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.719336272 +0000 UTC m=+0.280859670 container init e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:39:22 compute-0 podman[335680]: 2025-12-03 01:39:22.734689053 +0000 UTC m=+0.119298702 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.742488932 +0000 UTC m=+0.304012310 container start e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:39:22 compute-0 podman[335643]: 2025-12-03 01:39:22.748707217 +0000 UTC m=+0.310230615 container attach e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:39:22 compute-0 podman[335681]: 2025-12-03 01:39:22.756882126 +0000 UTC m=+0.143216453 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec  3 01:39:22 compute-0 podman[335684]: 2025-12-03 01:39:22.75913628 +0000 UTC m=+0.138853331 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 01:39:22 compute-0 podman[335685]: 2025-12-03 01:39:22.817404266 +0000 UTC m=+0.177601639 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 01:39:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:23 compute-0 python3.9[335905]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:23 compute-0 dreamy_poincare[335704]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:39:23 compute-0 dreamy_poincare[335704]: --> relative data size: 1.0
Dec  3 01:39:23 compute-0 dreamy_poincare[335704]: --> All data devices are unavailable
Dec  3 01:39:24 compute-0 systemd[1]: libpod-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Deactivated successfully.
Dec  3 01:39:24 compute-0 podman[335643]: 2025-12-03 01:39:24.002208775 +0000 UTC m=+1.563732173 container died e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:39:24 compute-0 systemd[1]: libpod-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Consumed 1.186s CPU time.
Dec  3 01:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc9ee0185f46dd0eb3390dfb017df79518e7595cb90b05fcf8639db5f86c4203-merged.mount: Deactivated successfully.
Dec  3 01:39:24 compute-0 podman[335643]: 2025-12-03 01:39:24.113590863 +0000 UTC m=+1.675114251 container remove e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:39:24 compute-0 systemd[1]: libpod-conmon-e0259b8f6d773507487d1746467cd66c1a69f0c888050bd9102da51b72ca44a0.scope: Deactivated successfully.
Dec  3 01:39:25 compute-0 python3.9[336183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:25 compute-0 podman[336218]: 2025-12-03 01:39:25.218575119 +0000 UTC m=+0.095172924 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.265335053 +0000 UTC m=+0.075518773 container create f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:39:25 compute-0 podman[336222]: 2025-12-03 01:39:25.270864238 +0000 UTC m=+0.141971709 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.23568913 +0000 UTC m=+0.045872930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:25 compute-0 systemd[1]: Started libpod-conmon-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope.
Dec  3 01:39:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.405281093 +0000 UTC m=+0.215464903 container init f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.421043346 +0000 UTC m=+0.231227096 container start f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.42794496 +0000 UTC m=+0.238128720 container attach f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:39:25 compute-0 upbeat_grothendieck[336303]: 167 167
Dec  3 01:39:25 compute-0 systemd[1]: libpod-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope: Deactivated successfully.
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.432694033 +0000 UTC m=+0.242877783 container died f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:39:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3defef656fe750f4b3d2162334d665f6e25f05ffa591d5df87d88d7ea960eb55-merged.mount: Deactivated successfully.
Dec  3 01:39:25 compute-0 podman[336241]: 2025-12-03 01:39:25.509102899 +0000 UTC m=+0.319286639 container remove f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:39:25 compute-0 systemd[1]: libpod-conmon-f5a5db2fdb39272ff8b5dc1ec1739f0b6c63dc7849fce21fb974a09c554a3dad.scope: Deactivated successfully.
Dec  3 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.76296063 +0000 UTC m=+0.068134785 container create 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:39:25 compute-0 systemd[1]: Started libpod-conmon-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope.
Dec  3 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.736229579 +0000 UTC m=+0.041403824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.912409826 +0000 UTC m=+0.217583981 container init 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.924806604 +0000 UTC m=+0.229980769 container start 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:39:25 compute-0 podman[336394]: 2025-12-03 01:39:25.929192298 +0000 UTC m=+0.234366463 container attach 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:39:26 compute-0 python3.9[336468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]: {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    "0": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "devices": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "/dev/loop3"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            ],
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_name": "ceph_lv0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_size": "21470642176",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "name": "ceph_lv0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "tags": {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_name": "ceph",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.crush_device_class": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.encrypted": "0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_id": "0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.vdo": "0"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            },
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "vg_name": "ceph_vg0"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        }
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    ],
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    "1": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "devices": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "/dev/loop4"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            ],
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_name": "ceph_lv1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_size": "21470642176",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "name": "ceph_lv1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "tags": {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_name": "ceph",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.crush_device_class": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.encrypted": "0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_id": "1",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.vdo": "0"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            },
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "vg_name": "ceph_vg1"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        }
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    ],
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    "2": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "devices": [
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "/dev/loop5"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            ],
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_name": "ceph_lv2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_size": "21470642176",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "name": "ceph_lv2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "tags": {
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.cluster_name": "ceph",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.crush_device_class": "",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.encrypted": "0",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osd_id": "2",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:                "ceph.vdo": "0"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            },
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "type": "block",
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:            "vg_name": "ceph_vg2"
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:        }
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]:    ]
Dec  3 01:39:26 compute-0 pedantic_swanson[336433]: }
Dec  3 01:39:26 compute-0 systemd[1]: libpod-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope: Deactivated successfully.
Dec  3 01:39:26 compute-0 podman[336394]: 2025-12-03 01:39:26.847154721 +0000 UTC m=+1.152328916 container died 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6753130cf9225d5ca89a697fcd91a58b283dff5a442242e876f2344a076b14-merged.mount: Deactivated successfully.
Dec  3 01:39:26 compute-0 podman[336394]: 2025-12-03 01:39:26.934924716 +0000 UTC m=+1.240098881 container remove 1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:39:26 compute-0 systemd[1]: libpod-conmon-1d245dce376d69ecfa44f02c47932117b20d81c9ac0eda176a3ea21451bf8f50.scope: Deactivated successfully.
Dec  3 01:39:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:27 compute-0 python3.9[336686]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.038912794 +0000 UTC m=+0.082805636 container create 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.004785316 +0000 UTC m=+0.048678188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:28 compute-0 systemd[1]: Started libpod-conmon-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope.
Dec  3 01:39:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.17975615 +0000 UTC m=+0.223648962 container init 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.197670333 +0000 UTC m=+0.241563125 container start 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.202267333 +0000 UTC m=+0.246160125 container attach 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:39:28 compute-0 heuristic_brattain[336872]: 167 167
Dec  3 01:39:28 compute-0 systemd[1]: libpod-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope: Deactivated successfully.
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.21144509 +0000 UTC m=+0.255337922 container died 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-55311c7bcc74168cc861d57dea7dcaa00155f614505dfc0cdc58a57c7884b687-merged.mount: Deactivated successfully.
Dec  3 01:39:28 compute-0 podman[336822]: 2025-12-03 01:39:28.282314691 +0000 UTC m=+0.326207493 container remove 5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:39:28 compute-0 systemd[1]: libpod-conmon-5e9efa52fec0d1945f420a299be9e18b75cd25ca4a95dc31bda1e772c68b31e4.scope: Deactivated successfully.
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:39:28
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.547692235 +0000 UTC m=+0.080408690 container create 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.521131219 +0000 UTC m=+0.053847654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:39:28 compute-0 systemd[1]: Started libpod-conmon-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope.
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:39:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:39:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.72910763 +0000 UTC m=+0.261824085 container init 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.750369447 +0000 UTC m=+0.283085902 container start 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:39:28 compute-0 podman[336956]: 2025-12-03 01:39:28.757622351 +0000 UTC m=+0.290338786 container attach 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:39:28 compute-0 python3.9[336982]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:29 compute-0 podman[158098]: time="2025-12-03T01:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 39893 "" "Go-http-client/1.1"
Dec  3 01:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8125 "" "Go-http-client/1.1"
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]: {
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_id": 2,
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "type": "bluestore"
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    },
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_id": 1,
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "type": "bluestore"
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    },
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_id": 0,
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:        "type": "bluestore"
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]:    }
Dec  3 01:39:29 compute-0 quizzical_margulis[336985]: }
Dec  3 01:39:29 compute-0 systemd[1]: libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Deactivated successfully.
Dec  3 01:39:29 compute-0 systemd[1]: libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Consumed 1.176s CPU time.
Dec  3 01:39:29 compute-0 conmon[336985]: conmon 5fd8cc2c013d645e5861 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope/container/memory.events
Dec  3 01:39:30 compute-0 podman[337173]: 2025-12-03 01:39:30.003769291 +0000 UTC m=+0.056513758 container died 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:39:30 compute-0 python3.9[337164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd9cbdf7bebf9026a34d4611ad6094c98777d0fa5be91819e63b6e45abc8087f-merged.mount: Deactivated successfully.
Dec  3 01:39:30 compute-0 podman[337173]: 2025-12-03 01:39:30.124992106 +0000 UTC m=+0.177736523 container remove 5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:39:30 compute-0 systemd[1]: libpod-conmon-5fd8cc2c013d645e5861ae410cdb49e7e68e447974b7b5965ccb06d7bf47a9af.scope: Deactivated successfully.
Dec  3 01:39:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:39:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:39:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6cab5926-c891-48a5-96ef-63ce93f509e7 does not exist
Dec  3 01:39:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ff0d941f-fbb7-4121-8710-aea2c90d207e does not exist
Dec  3 01:39:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:39:31 compute-0 python3.9[337389]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:39:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:39:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:31 compute-0 podman[337453]: 2025-12-03 01:39:31.892925583 +0000 UTC m=+0.142985928 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, maintainer=Red Hat, Inc.)
Dec  3 01:39:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:32 compute-0 python3.9[337564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:39:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:33 compute-0 podman[337607]: 2025-12-03 01:39:33.871203907 +0000 UTC m=+0.117236714 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:39:34 compute-0 python3.9[337736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:36 compute-0 python3.9[337888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:37 compute-0 python3.9[338040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:39:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:39:38 compute-0 podman[338164]: 2025-12-03 01:39:38.359680643 +0000 UTC m=+0.146610584 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:39:38 compute-0 python3.9[338215]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:39 compute-0 python3.9[338367]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:40 compute-0 python3.9[338519]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.975 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.976 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.981 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.980 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.984 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.985 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.986 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.allocation': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.993 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.994 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.995 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.996 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.997 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.998 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:41 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:39:40.999 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:39:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:41 compute-0 python3.9[338672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5690 writes, 23K keys, 5690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5690 writes, 885 syncs, 6.43 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd94ae11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 01:39:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:42 compute-0 python3.9[338824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:43 compute-0 python3.9[338976]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:45 compute-0 python3.9[339128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:46 compute-0 python3.9[339280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:48 compute-0 python3.9[339432]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6920 writes, 28K keys, 6920 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6920 writes, 1242 syncs, 5.57 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f0a3d5d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 01:39:49 compute-0 python3.9[339584]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:50 compute-0 python3.9[339737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:51 compute-0 python3.9[339891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:52 compute-0 python3.9[340043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:39:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:53 compute-0 podman[340169]: 2025-12-03 01:39:53.715776943 +0000 UTC m=+0.113055528 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:39:53 compute-0 podman[340168]: 2025-12-03 01:39:53.716203095 +0000 UTC m=+0.122797128 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:39:53 compute-0 podman[340167]: 2025-12-03 01:39:53.731510163 +0000 UTC m=+0.145204146 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:39:53 compute-0 podman[340170]: 2025-12-03 01:39:53.782063932 +0000 UTC m=+0.176632284 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:39:53 compute-0 python3.9[340269]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:39:54 compute-0 auditd[706]: Audit daemon rotating log files
Dec  3 01:39:55 compute-0 python3.9[340428]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:39:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:55 compute-0 podman[340524]: 2025-12-03 01:39:55.883168073 +0000 UTC m=+0.129685332 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 01:39:55 compute-0 podman[340517]: 2025-12-03 01:39:55.888377572 +0000 UTC m=+0.133597056 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 01:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5709 writes, 24K keys, 5709 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5709 writes, 908 syncs, 6.29 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 178 writes, 270 keys, 178 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 178 writes, 88 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558b8220d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Dec  3 01:39:56 compute-0 python3.9[340616]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:39:56 compute-0 systemd[1]: Reloading.
Dec  3 01:39:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:39:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:39:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 01:39:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:39:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:58 compute-0 python3.9[340803]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:39:59 compute-0 python3.9[340956]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:39:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.598 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.599 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:39:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:39:59 compute-0 podman[158098]: time="2025-12-03T01:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec  3 01:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7697 "" "Go-http-client/1.1"
Dec  3 01:40:00 compute-0 python3.9[341109]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:40:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:40:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:02 compute-0 podman[341234]: 2025-12-03 01:40:02.321607782 +0000 UTC m=+0.177235661 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 01:40:02 compute-0 python3.9[341281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:03 compute-0 python3.9[341435]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:04 compute-0 podman[341560]: 2025-12-03 01:40:04.864182705 +0000 UTC m=+0.123133837 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 01:40:05 compute-0 python3.9[341605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:06 compute-0 python3.9[341758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:07 compute-0 python3.9[341911]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:40:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.587002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007587104, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 2504728, "memory_usage": 2534016, "flush_reason": "Manual Compaction"}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007614965, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2470997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14788, "largest_seqno": 16302, "table_properties": {"data_size": 2463837, "index_size": 4231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14130, "raw_average_key_size": 19, "raw_value_size": 2449715, "raw_average_value_size": 3397, "num_data_blocks": 193, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764725837, "oldest_key_time": 1764725837, "file_creation_time": 1764726007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 28078 microseconds, and 11289 cpu microseconds.
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.615060) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2470997 bytes OK
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.615113) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617696) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617719) EVENT_LOG_v1 {"time_micros": 1764726007617711, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.617741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2498144, prev total WAL file size 2498144, number of live WAL files 2.
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.619874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2413KB)], [35(6887KB)]
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007620007, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9523452, "oldest_snapshot_seqno": -1}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3972 keys, 7767520 bytes, temperature: kUnknown
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007698734, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7767520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7738562, "index_size": 17904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97075, "raw_average_key_size": 24, "raw_value_size": 7664282, "raw_average_value_size": 1929, "num_data_blocks": 759, "num_entries": 3972, "num_filter_entries": 3972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.699137) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7767520 bytes
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.702494) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 98.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4486, records dropped: 514 output_compression: NoCompression
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.702591) EVENT_LOG_v1 {"time_micros": 1764726007702521, "job": 16, "event": "compaction_finished", "compaction_time_micros": 78877, "compaction_time_cpu_micros": 34685, "output_level": 6, "num_output_files": 1, "total_output_size": 7767520, "num_input_records": 4486, "num_output_records": 3972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007703643, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726007706407, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.619259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:07 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:40:07.706687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:40:08 compute-0 podman[341993]: 2025-12-03 01:40:08.869140483 +0000 UTC m=+0.123689352 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:40:09 compute-0 python3.9[342088]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:10 compute-0 python3.9[342240]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:11 compute-0 python3.9[342392]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:12 compute-0 python3.9[342544]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:13 compute-0 python3.9[342696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:14 compute-0 python3.9[342848]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:15 compute-0 python3.9[343000]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:17 compute-0 python3.9[343152]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:18 compute-0 python3.9[343304]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:20 compute-0 python3.9[343457]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:24 compute-0 podman[343484]: 2025-12-03 01:40:24.878462455 +0000 UTC m=+0.110446789 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 01:40:24 compute-0 podman[343482]: 2025-12-03 01:40:24.883855149 +0000 UTC m=+0.127609007 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:40:24 compute-0 podman[343483]: 2025-12-03 01:40:24.914415044 +0000 UTC m=+0.151484544 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, name=ubi9-minimal, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:40:24 compute-0 podman[343485]: 2025-12-03 01:40:24.930126443 +0000 UTC m=+0.153647731 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 01:40:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:26 compute-0 podman[343666]: 2025-12-03 01:40:26.163509418 +0000 UTC m=+0.119103739 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:40:26 compute-0 podman[343665]: 2025-12-03 01:40:26.189870271 +0000 UTC m=+0.151467413 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  3 01:40:26 compute-0 python3.9[343726]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  3 01:40:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:27 compute-0 python3.9[343881]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:40:28
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:40:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:40:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:29 compute-0 podman[158098]: time="2025-12-03T01:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec  3 01:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7698 "" "Go-http-client/1.1"
Dec  3 01:40:29 compute-0 python3.9[344039]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 01:40:30 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:40:30 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:40:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:40:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:40:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:40:31 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:31 compute-0 systemd-logind[800]: New session 56 of user zuul.
Dec  3 01:40:31 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec  3 01:40:32 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec  3 01:40:32 compute-0 systemd-logind[800]: Session 56 logged out. Waiting for processes to exit.
Dec  3 01:40:32 compute-0 systemd-logind[800]: Removed session 56.
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:32 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f8f87bfc-128b-42ba-bd19-78b78239c278 does not exist
Dec  3 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1e49fe43-88dc-4d6e-b368-7ce85e37d9e9 does not exist
Dec  3 01:40:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 724da50a-f2b7-4e10-819b-0604a6891e5b does not exist
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:40:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:40:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:40:32 compute-0 podman[344439]: 2025-12-03 01:40:32.858175255 +0000 UTC m=+0.112111433 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  3 01:40:33 compute-0 python3.9[344566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:40:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.69208286 +0000 UTC m=+0.069652230 container create 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.659001247 +0000 UTC m=+0.036570707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:33 compute-0 systemd[1]: Started libpod-conmon-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope.
Dec  3 01:40:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.846224812 +0000 UTC m=+0.223794232 container init 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.863785931 +0000 UTC m=+0.241355311 container start 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.869064652 +0000 UTC m=+0.246634032 container attach 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:40:33 compute-0 wonderful_wilson[344765]: 167 167
Dec  3 01:40:33 compute-0 systemd[1]: libpod-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope: Deactivated successfully.
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.875434662 +0000 UTC m=+0.253004072 container died 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2db2a4eb2e266fb05469415630416c6b7cbc638f45fcb2aac6a4f400cc04c133-merged.mount: Deactivated successfully.
Dec  3 01:40:33 compute-0 podman[344721]: 2025-12-03 01:40:33.957497072 +0000 UTC m=+0.335066452 container remove 2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:40:33 compute-0 systemd[1]: libpod-conmon-2728c4d4ee49815e907e78b4671aec346428ce48f5aedd7370fdfcd72ae67014.scope: Deactivated successfully.
Dec  3 01:40:33 compute-0 python3.9[344769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726032.3762393-1249-189122045207061/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.249082573 +0000 UTC m=+0.089843898 container create 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.199245433 +0000 UTC m=+0.040006758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:34 compute-0 systemd[1]: Started libpod-conmon-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope.
Dec  3 01:40:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.400142164 +0000 UTC m=+0.240903539 container init 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.435090827 +0000 UTC m=+0.275852152 container start 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:40:34 compute-0 podman[344816]: 2025-12-03 01:40:34.442003052 +0000 UTC m=+0.282764377 container attach 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:40:35 compute-0 python3.9[344962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:35 compute-0 podman[344963]: 2025-12-03 01:40:35.213935262 +0000 UTC m=+0.126971029 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:40:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:35 compute-0 peaceful_borg[344867]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:40:35 compute-0 peaceful_borg[344867]: --> relative data size: 1.0
Dec  3 01:40:35 compute-0 peaceful_borg[344867]: --> All data devices are unavailable
Dec  3 01:40:35 compute-0 python3.9[345073]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:35 compute-0 systemd[1]: libpod-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Deactivated successfully.
Dec  3 01:40:35 compute-0 systemd[1]: libpod-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Consumed 1.185s CPU time.
Dec  3 01:40:35 compute-0 podman[344816]: 2025-12-03 01:40:35.685792544 +0000 UTC m=+1.526553839 container died 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a54f2db8a96b25f672c3de34cddc04a0c45aafb3b4b3fe54f7ab6e8b4bd970b0-merged.mount: Deactivated successfully.
Dec  3 01:40:35 compute-0 podman[344816]: 2025-12-03 01:40:35.78643009 +0000 UTC m=+1.627191395 container remove 56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:40:35 compute-0 systemd[1]: libpod-conmon-56b437c9a792d02fcd778aa6c90d3ea13d54b9b81b4a95e0a1d2884a20104975.scope: Deactivated successfully.
Dec  3 01:40:36 compute-0 python3.9[345343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:36 compute-0 podman[345382]: 2025-12-03 01:40:36.981733959 +0000 UTC m=+0.076685608 container create 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 01:40:37 compute-0 systemd[1]: Started libpod-conmon-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope.
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:36.954239195 +0000 UTC m=+0.049190874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.104645319 +0000 UTC m=+0.199596978 container init 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.115670843 +0000 UTC m=+0.210622492 container start 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.121288583 +0000 UTC m=+0.216240262 container attach 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:40:37 compute-0 focused_shannon[345442]: 167 167
Dec  3 01:40:37 compute-0 systemd[1]: libpod-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope: Deactivated successfully.
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.125170407 +0000 UTC m=+0.220122056 container died 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2cff0ac7720074036f5a4b8f3987e4756b57885ad47793aeadd17227f5936c0-merged.mount: Deactivated successfully.
Dec  3 01:40:37 compute-0 podman[345382]: 2025-12-03 01:40:37.183159654 +0000 UTC m=+0.278111313 container remove 3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:40:37 compute-0 systemd[1]: libpod-conmon-3861582df168a4023bb79945aeed92a6fb7185739d1bacfa482736934235e036.scope: Deactivated successfully.
Dec  3 01:40:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.429017365 +0000 UTC m=+0.080150830 container create 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:40:37 compute-0 systemd[1]: Started libpod-conmon-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope.
Dec  3 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.402351213 +0000 UTC m=+0.053484728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.574099366 +0000 UTC m=+0.225232901 container init 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.592691252 +0000 UTC m=+0.243824717 container start 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:40:37 compute-0 podman[345513]: 2025-12-03 01:40:37.597703926 +0000 UTC m=+0.248837421 container attach 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:40:37 compute-0 python3.9[345553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726035.9707463-1249-85723918476647/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:40:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]: {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    "0": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "devices": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "/dev/loop3"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            ],
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_name": "ceph_lv0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_size": "21470642176",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "name": "ceph_lv0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "tags": {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_name": "ceph",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.crush_device_class": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.encrypted": "0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_id": "0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.vdo": "0"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            },
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "vg_name": "ceph_vg0"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        }
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    ],
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    "1": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "devices": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "/dev/loop4"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            ],
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_name": "ceph_lv1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_size": "21470642176",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "name": "ceph_lv1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "tags": {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_name": "ceph",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.crush_device_class": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.encrypted": "0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_id": "1",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.vdo": "0"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            },
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "vg_name": "ceph_vg1"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        }
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    ],
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    "2": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "devices": [
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "/dev/loop5"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            ],
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_name": "ceph_lv2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_size": "21470642176",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "name": "ceph_lv2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "tags": {
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.cluster_name": "ceph",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.crush_device_class": "",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.encrypted": "0",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osd_id": "2",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:                "ceph.vdo": "0"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            },
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "type": "block",
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:            "vg_name": "ceph_vg2"
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:        }
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]:    ]
Dec  3 01:40:38 compute-0 awesome_archimedes[345556]: }
Dec  3 01:40:38 compute-0 systemd[1]: libpod-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope: Deactivated successfully.
Dec  3 01:40:38 compute-0 podman[345513]: 2025-12-03 01:40:38.460041359 +0000 UTC m=+1.111174864 container died 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:40:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8e328e1ab498748a41bf297377b2a1105c6c7d5e433a55de4b84864bf69eb9-merged.mount: Deactivated successfully.
Dec  3 01:40:38 compute-0 podman[345513]: 2025-12-03 01:40:38.551904041 +0000 UTC m=+1.203037496 container remove 0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:40:38 compute-0 systemd[1]: libpod-conmon-0587a4992db8c1a974f3a6b0976248f3a00132f0cbf3c52514bc672c754e45fb.scope: Deactivated successfully.
Dec  3 01:40:38 compute-0 python3.9[345725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:39 compute-0 podman[345846]: 2025-12-03 01:40:39.109627635 +0000 UTC m=+0.107846629 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:40:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:39 compute-0 python3.9[345992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726037.9508862-1249-193219835405454/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.705429285 +0000 UTC m=+0.070737039 container create 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.674033327 +0000 UTC m=+0.039341061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:39 compute-0 systemd[1]: Started libpod-conmon-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope.
Dec  3 01:40:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.926910935 +0000 UTC m=+0.292218739 container init 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.947071443 +0000 UTC m=+0.312379197 container start 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.954175103 +0000 UTC m=+0.319482907 container attach 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:40:39 compute-0 dazzling_perlman[346052]: 167 167
Dec  3 01:40:39 compute-0 systemd[1]: libpod-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope: Deactivated successfully.
Dec  3 01:40:39 compute-0 podman[346012]: 2025-12-03 01:40:39.962996568 +0000 UTC m=+0.328304372 container died 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-24af5ec410e89d059685d8da8128ece4b44e910064e5130efefd52fc2c063752-merged.mount: Deactivated successfully.
Dec  3 01:40:40 compute-0 podman[346012]: 2025-12-03 01:40:40.035818872 +0000 UTC m=+0.401126586 container remove 55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_perlman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:40:40 compute-0 systemd[1]: libpod-conmon-55e84f3f2404af4ca8d83c86acd9c0023830f3ffbb7ee0231bb6b1dec6633850.scope: Deactivated successfully.
Dec  3 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.310365659 +0000 UTC m=+0.083082259 container create aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.272648272 +0000 UTC m=+0.045364852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:40:40 compute-0 systemd[1]: Started libpod-conmon-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope.
Dec  3 01:40:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.499461835 +0000 UTC m=+0.272178485 container init aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.534194052 +0000 UTC m=+0.306910622 container start aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:40:40 compute-0 podman[346145]: 2025-12-03 01:40:40.5423863 +0000 UTC m=+0.315102900 container attach aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:40:40 compute-0 python3.9[346220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:41 compute-0 python3.9[346347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726039.994734-1249-3237874373654/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]: {
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_id": 2,
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "type": "bluestore"
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    },
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_id": 1,
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "type": "bluestore"
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    },
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_id": 0,
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:        "type": "bluestore"
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]:    }
Dec  3 01:40:41 compute-0 quirky_driscoll[346190]: }
Dec  3 01:40:41 compute-0 systemd[1]: libpod-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Deactivated successfully.
Dec  3 01:40:41 compute-0 podman[346145]: 2025-12-03 01:40:41.753984393 +0000 UTC m=+1.526700963 container died aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:40:41 compute-0 systemd[1]: libpod-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Consumed 1.217s CPU time.
Dec  3 01:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f96cd473c19fa2f37995d332b62a7b654639c625d226ef41440046c53abebaa-merged.mount: Deactivated successfully.
Dec  3 01:40:41 compute-0 podman[346145]: 2025-12-03 01:40:41.860088915 +0000 UTC m=+1.632805495 container remove aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:40:41 compute-0 systemd[1]: libpod-conmon-aec79770080fc634e555690c2393cd358ee32efd9892285a278749d0330ac904.scope: Deactivated successfully.
Dec  3 01:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:41 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 912c5aae-7047-4b29-9ac2-204f05d5954a does not exist
Dec  3 01:40:41 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fdadf596-8977-44db-a8cb-9360df5cb871 does not exist
Dec  3 01:40:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:40:43 compute-0 python3.9[346583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:43 compute-0 python3.9[346704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726041.878425-1249-216442937457595/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:46 compute-0 python3.9[346856]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:40:47 compute-0 python3.9[347008]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:48 compute-0 python3.9[347164]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:40:49 compute-0 python3.9[347316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:50 compute-0 python3.9[347440]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764726048.5458724-1356-207141803458916/.source _original_basename=.5j8wav2u follow=False checksum=477c62050a358b588929eb0b410757a54c9a1308 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  3 01:40:51 compute-0 python3.9[347592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:40:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:52 compute-0 python3.9[347744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:53 compute-0 python3.9[347865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726051.7291126-1382-197171699205897/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:55 compute-0 podman[347992]: 2025-12-03 01:40:55.26367623 +0000 UTC m=+0.116876079 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64)
Dec  3 01:40:55 compute-0 podman[347991]: 2025-12-03 01:40:55.29778209 +0000 UTC m=+0.164046438 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:40:55 compute-0 podman[347993]: 2025-12-03 01:40:55.298142439 +0000 UTC m=+0.143311834 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 01:40:55 compute-0 podman[347999]: 2025-12-03 01:40:55.300249586 +0000 UTC m=+0.151624267 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 01:40:55 compute-0 python3.9[348066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:40:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:56 compute-0 podman[348194]: 2025-12-03 01:40:56.776857491 +0000 UTC m=+0.130418641 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:40:56 compute-0 podman[348195]: 2025-12-03 01:40:56.824518413 +0000 UTC m=+0.172203806 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  3 01:40:56 compute-0 python3.9[348251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764726054.6556053-1397-267399815361325/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:40:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:40:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:58 compute-0 python3.9[348410]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:40:59 compute-0 python3.9[348562]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:40:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.600 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:40:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:40:59 compute-0 podman[158098]: time="2025-12-03T01:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38320 "" "Go-http-client/1.1"
Dec  3 01:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7707 "" "Go-http-client/1.1"
Dec  3 01:41:00 compute-0 python3[348714]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:41:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:41:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:03 compute-0 podman[348748]: 2025-12-03 01:41:03.806343363 +0000 UTC m=+0.068275133 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, 
distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 01:41:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:10 compute-0 podman[348782]: 2025-12-03 01:41:10.196956446 +0000 UTC m=+4.464873752 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:41:10 compute-0 podman[348793]: 2025-12-03 01:41:10.216361934 +0000 UTC m=+0.476922999 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:41:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:17 compute-0 podman[348725]: 2025-12-03 01:41:17.528402397 +0000 UTC m=+16.512997475 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 01:41:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:17 compute-0 podman[348858]: 2025-12-03 01:41:17.82793351 +0000 UTC m=+0.109319128 container create 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:41:17 compute-0 podman[348858]: 2025-12-03 01:41:17.77322814 +0000 UTC m=+0.054613828 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 01:41:17 compute-0 python3[348714]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  3 01:41:19 compute-0 python3.9[349046]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:41:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec  3 01:41:21 compute-0 python3.9[349201]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  3 01:41:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Dec  3 01:41:23 compute-0 python3.9[349353]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:41:25 compute-0 python3[349505]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:41:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec  3 01:41:25 compute-0 podman[349538]: 2025-12-03 01:41:25.544016086 +0000 UTC m=+0.105872046 container create 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Dec  3 01:41:25 compute-0 podman[349538]: 2025-12-03 01:41:25.492100381 +0000 UTC m=+0.053956391 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 01:41:25 compute-0 python3[349505]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  3 01:41:25 compute-0 podman[349573]: 2025-12-03 01:41:25.876866039 +0000 UTC m=+0.122357976 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible)
Dec  3 01:41:25 compute-0 podman[349572]: 2025-12-03 01:41:25.876977582 +0000 UTC m=+0.121489493 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:41:25 compute-0 podman[349574]: 2025-12-03 01:41:25.906288294 +0000 UTC m=+0.137929572 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:41:25 compute-0 podman[349575]: 2025-12-03 01:41:25.951711746 +0000 UTC m=+0.179733997 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec  3 01:41:26 compute-0 python3.9[349806]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:41:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:41:27 compute-0 podman[349911]: 2025-12-03 01:41:27.873512082 +0000 UTC m=+0.125577982 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:41:27 compute-0 podman[349915]: 2025-12-03 01:41:27.876589954 +0000 UTC m=+0.127082892 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  3 01:41:28 compute-0 python3.9[349997]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:41:28
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'backups', 'vms', '.rgw.root', 'images', 'volumes', 'default.rgw.log']
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:41:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:41:29 compute-0 python3.9[350148]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726088.2548232-1489-106830894291005/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:41:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:41:29 compute-0 podman[158098]: time="2025-12-03T01:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42584 "" "Go-http-client/1.1"
Dec  3 01:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7694 "" "Go-http-client/1.1"
Dec  3 01:41:30 compute-0 python3.9[350224]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:41:30 compute-0 systemd[1]: Reloading.
Dec  3 01:41:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:41:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:41:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:41:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:41:31 compute-0 python3.9[350337]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:41:31 compute-0 systemd[1]: Reloading.
Dec  3 01:41:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:41:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:41:32 compute-0 systemd[1]: Starting nova_compute container...
Dec  3 01:41:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:32 compute-0 podman[350377]: 2025-12-03 01:41:32.491382687 +0000 UTC m=+0.236209655 container init 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec  3 01:41:32 compute-0 podman[350377]: 2025-12-03 01:41:32.515315186 +0000 UTC m=+0.260142094 container start 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec  3 01:41:32 compute-0 podman[350377]: nova_compute
Dec  3 01:41:32 compute-0 nova_compute[350390]: + sudo -E kolla_set_configs
Dec  3 01:41:32 compute-0 systemd[1]: Started nova_compute container.
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Validating config file
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying service configuration files
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /etc/ceph
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Creating directory /etc/ceph
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Writing out command to execute
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:32 compute-0 nova_compute[350390]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 01:41:32 compute-0 nova_compute[350390]: ++ cat /run_command
Dec  3 01:41:32 compute-0 nova_compute[350390]: + CMD=nova-compute
Dec  3 01:41:32 compute-0 nova_compute[350390]: + ARGS=
Dec  3 01:41:32 compute-0 nova_compute[350390]: + sudo kolla_copy_cacerts
Dec  3 01:41:32 compute-0 nova_compute[350390]: + [[ ! -n '' ]]
Dec  3 01:41:32 compute-0 nova_compute[350390]: + . kolla_extend_start
Dec  3 01:41:32 compute-0 nova_compute[350390]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 01:41:32 compute-0 nova_compute[350390]: Running command: 'nova-compute'
Dec  3 01:41:32 compute-0 nova_compute[350390]: + umask 0022
Dec  3 01:41:32 compute-0 nova_compute[350390]: + exec nova-compute
Dec  3 01:41:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec  3 01:41:34 compute-0 podman[350528]: 2025-12-03 01:41:34.68212771 +0000 UTC m=+0.177357873 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.796 350396 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  3 01:41:34 compute-0 python3.9[350564]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.932 350396 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.958 350396 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:41:34 compute-0 nova_compute[350390]: 2025-12-03 01:41:34.958 350396 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  3 01:41:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.595 350396 INFO nova.virt.driver [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.738 350396 INFO nova.compute.provider_config [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.759 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.760 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.761 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.762 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.763 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.764 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.765 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.766 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.767 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.768 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.769 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.770 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.771 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.772 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.773 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.774 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.775 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.776 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.777 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.778 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.779 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.780 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.781 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.782 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.783 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.784 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.785 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.786 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.787 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.788 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.789 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.790 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.791 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.792 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.793 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.794 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.795 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.796 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.797 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.798 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.799 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.800 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.801 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.802 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.803 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.804 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.805 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.806 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.807 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.808 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.809 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.810 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.811 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.812 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.813 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.814 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.815 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.816 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.817 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.818 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.819 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.820 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.821 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.822 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.823 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.824 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.825 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.826 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.827 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.828 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.829 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.830 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.831 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.832 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.833 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.834 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.835 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.836 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.837 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.838 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.839 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.840 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.841 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.842 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 WARNING oslo_config.cfg [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 01:41:35 compute-0 nova_compute[350390]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 01:41:35 compute-0 nova_compute[350390]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  3 01:41:35 compute-0 nova_compute[350390]: and ``live_migration_inbound_addr`` respectively.
Dec  3 01:41:35 compute-0 nova_compute[350390]: ).  Its value may be silently ignored in the future.#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.843 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.844 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.845 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_secret_uuid        = 3765feb2-36f8-5b86-b74c-64e9221f9c4c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.846 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.847 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.848 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.849 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.850 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.851 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.852 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.853 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.854 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.855 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.856 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.857 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.858 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.859 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.860 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.861 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.862 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.863 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.864 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.865 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.866 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.867 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.868 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.869 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.870 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.871 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.872 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.873 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.874 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.875 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.876 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.877 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.878 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.879 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.880 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.881 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.882 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.883 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.884 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.885 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.886 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.887 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.888 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.889 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.890 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.891 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.892 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.893 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.894 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.895 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.896 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.897 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.898 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.899 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.900 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.901 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.902 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.903 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.904 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.905 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.906 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.907 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.908 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.909 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.910 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.911 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.912 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.913 350396 DEBUG oslo_service.service [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.914 350396 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.936 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.937 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.937 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.938 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.956 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0e7169ce50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.962 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0e7169ce50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.963 350396 INFO nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.995 350396 WARNING nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 01:41:35 compute-0 nova_compute[350390]: 2025-12-03 01:41:35.995 350396 DEBUG nova.virt.libvirt.volume.mount [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  3 01:41:36 compute-0 python3.9[350760]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.167 350396 INFO nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: 
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <host>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <uuid>bb85f21b-9f67-464f-8fbe-e50d4e1e7eb4</uuid>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <arch>x86_64</arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model>EPYC-Rome-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <vendor>AMD</vendor>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <microcode version='16777317'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <signature family='23' model='49' stepping='0'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='x2apic'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='tsc-deadline'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='osxsave'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='hypervisor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='tsc_adjust'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='spec-ctrl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='stibp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='arch-capabilities'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='cmp_legacy'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='topoext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='virt-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='lbrv'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='tsc-scale'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='vmcb-clean'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='pause-filter'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='pfthreshold'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='svme-addr-chk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='rdctl-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='mds-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature name='pschange-mc-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <pages unit='KiB' size='4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <pages unit='KiB' size='2048'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <pages unit='KiB' size='1048576'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <power_management>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <suspend_mem/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </power_management>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <iommu support='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <migration_features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <live/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <uri_transports>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <uri_transport>tcp</uri_transport>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <uri_transport>rdma</uri_transport>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </uri_transports>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </migration_features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <topology>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <cells num='1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <cell id='0'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <memory unit='KiB'>7864312</memory>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <pages unit='KiB' size='4'>1966078</pages>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <distances>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <sibling id='0' value='10'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          </distances>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          <cpus num='8'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:          </cpus>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        </cell>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </cells>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </topology>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <cache>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </cache>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <secmodel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model>selinux</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <doi>0</doi>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </secmodel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <secmodel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model>dac</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <doi>0</doi>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </secmodel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </host>
Dec  3 01:41:37 compute-0 nova_compute[350390]: 
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <guest>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <os_type>hvm</os_type>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <arch name='i686'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <wordsize>32</wordsize>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <domain type='qemu'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <domain type='kvm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <pae/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <nonpae/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <acpi default='on' toggle='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <apic default='on' toggle='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <cpuselection/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <deviceboot/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <disksnapshot default='on' toggle='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <externalSnapshot/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </guest>
Dec  3 01:41:37 compute-0 nova_compute[350390]: 
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <guest>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <os_type>hvm</os_type>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <arch name='x86_64'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <wordsize>64</wordsize>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <domain type='qemu'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <domain type='kvm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <acpi default='on' toggle='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <apic default='on' toggle='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <cpuselection/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <deviceboot/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <disksnapshot default='on' toggle='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <externalSnapshot/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </guest>
Dec  3 01:41:37 compute-0 nova_compute[350390]: 
Dec  3 01:41:37 compute-0 nova_compute[350390]: </capabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: #033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.179 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.213 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <domain>kvm</domain>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <arch>i686</arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <vcpu max='4096'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <iothreads supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <os supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='firmware'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <loader supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>rom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pflash</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='readonly'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>yes</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='secure'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </loader>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </os>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='maximumMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <vendor>AMD</vendor>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='succor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='custom' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-128'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-256'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-512'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <memoryBacking supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='sourceType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>anonymous</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>memfd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </memoryBacking>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <disk supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='diskDevice'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>disk</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cdrom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>floppy</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>lun</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>fdc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>sata</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </disk>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <graphics supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vnc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egl-headless</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </graphics>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <video supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='modelType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vga</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cirrus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>none</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>bochs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ramfb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </video>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hostdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='mode'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>subsystem</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='startupPolicy'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>mandatory</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>requisite</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>optional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='subsysType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pci</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='capsType'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='pciBackend'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hostdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <rng supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>random</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </rng>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <filesystem supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='driverType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>path</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>handle</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtiofs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </filesystem>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <tpm supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-tis</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-crb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emulator</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>external</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendVersion'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>2.0</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </tpm>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <redirdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </redirdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <channel supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </channel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <crypto supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </crypto>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <interface supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>passt</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </interface>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <panic supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>isa</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>hyperv</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </panic>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <console supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>null</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dev</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pipe</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stdio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>udp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tcp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu-vdagent</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </console>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <gic supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <genid supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backup supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <async-teardown supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <ps2 supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sev supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sgx supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hyperv supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='features'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>relaxed</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vapic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>spinlocks</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vpindex</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>runtime</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>synic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stimer</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reset</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vendor_id</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>frequencies</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reenlightenment</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tlbflush</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ipi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>avic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emsr_bitmap</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>xmm_input</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hyperv>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <launchSecurity supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='sectype'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tdx</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </launchSecurity>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.223 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <domain>kvm</domain>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <arch>i686</arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <vcpu max='240'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <iothreads supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <os supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='firmware'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <loader supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>rom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pflash</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='readonly'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>yes</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='secure'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </loader>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </os>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='maximumMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <vendor>AMD</vendor>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='succor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='custom' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-128'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-256'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-512'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <memoryBacking supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='sourceType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>anonymous</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>memfd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </memoryBacking>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <disk supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='diskDevice'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>disk</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cdrom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>floppy</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>lun</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ide</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>fdc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>sata</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </disk>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <graphics supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vnc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egl-headless</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </graphics>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <video supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='modelType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vga</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cirrus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>none</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>bochs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ramfb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </video>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hostdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='mode'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>subsystem</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='startupPolicy'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>mandatory</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>requisite</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>optional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='subsysType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pci</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='capsType'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='pciBackend'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hostdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <rng supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>random</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </rng>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <filesystem supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='driverType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>path</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>handle</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtiofs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </filesystem>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <tpm supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-tis</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-crb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emulator</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>external</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendVersion'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>2.0</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </tpm>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <redirdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </redirdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <channel supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </channel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <crypto supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </crypto>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <interface supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>passt</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </interface>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <panic supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>isa</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>hyperv</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </panic>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <console supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>null</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dev</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pipe</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stdio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>udp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tcp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu-vdagent</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </console>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <gic supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <genid supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backup supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <async-teardown supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <ps2 supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sev supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sgx supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hyperv supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='features'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>relaxed</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vapic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>spinlocks</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vpindex</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>runtime</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>synic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stimer</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reset</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vendor_id</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>frequencies</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reenlightenment</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tlbflush</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ipi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>avic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emsr_bitmap</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>xmm_input</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hyperv>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <launchSecurity supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='sectype'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tdx</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </launchSecurity>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.302 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.309 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <domain>kvm</domain>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <arch>x86_64</arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <vcpu max='4096'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <iothreads supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <os supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='firmware'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>efi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <loader supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>rom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pflash</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='readonly'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>yes</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='secure'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>yes</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </loader>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </os>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='maximumMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <vendor>AMD</vendor>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='succor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='custom' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-128'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-256'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-512'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <memoryBacking supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='sourceType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>anonymous</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>memfd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </memoryBacking>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <disk supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='diskDevice'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>disk</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cdrom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>floppy</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>lun</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>fdc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>sata</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </disk>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <graphics supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vnc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egl-headless</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </graphics>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <video supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='modelType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vga</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cirrus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>none</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>bochs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ramfb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </video>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hostdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='mode'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>subsystem</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='startupPolicy'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>mandatory</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>requisite</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>optional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='subsysType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pci</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='capsType'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='pciBackend'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hostdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <rng supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>random</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </rng>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <filesystem supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='driverType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>path</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>handle</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtiofs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </filesystem>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <tpm supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-tis</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-crb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emulator</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>external</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendVersion'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>2.0</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </tpm>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <redirdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </redirdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <channel supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </channel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <crypto supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </crypto>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <interface supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>passt</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </interface>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <panic supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>isa</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>hyperv</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </panic>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <console supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>null</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dev</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pipe</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stdio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>udp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tcp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu-vdagent</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </console>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <gic supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <genid supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backup supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <async-teardown supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <ps2 supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sev supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sgx supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hyperv supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='features'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>relaxed</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vapic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>spinlocks</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vpindex</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>runtime</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>synic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stimer</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reset</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vendor_id</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>frequencies</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reenlightenment</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tlbflush</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ipi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>avic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emsr_bitmap</value>
Dec  3 01:41:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>xmm_input</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hyperv>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <launchSecurity supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='sectype'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tdx</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </launchSecurity>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.411 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 01:41:37 compute-0 nova_compute[350390]: <domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <domain>kvm</domain>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <arch>x86_64</arch>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <vcpu max='240'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <iothreads supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <os supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='firmware'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <loader supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>rom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pflash</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='readonly'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>yes</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='secure'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>no</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </loader>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </os>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='maximumMigratable'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>on</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>off</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <vendor>AMD</vendor>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='succor'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <mode name='custom' supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Denverton-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='auto-ibrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amd-psfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='stibp-always-on'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='EPYC-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-128'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-256'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx10-512'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='prefetchiti'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Haswell-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512er'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512pf'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fma4'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tbm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xop'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='amx-tile'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-bf16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-fp16'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bitalg'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrc'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fzrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='la57'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='taa-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xfd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ifma'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cmpccxadd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fbsdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='fsrs'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ibrs-all'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mcdt-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pbrsb-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='psdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='serialize'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vaes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='hle'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='rtm'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512bw'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512cd'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512dq'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512f'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='avx512vl'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='invpcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pcid'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='pku'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='mpx'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='core-capability'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='split-lock-detect'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='cldemote'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='erms'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='gfni'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdir64b'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='movdiri'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='xsaves'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='athlon-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='core2duo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='coreduo-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='n270-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='ss'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <blockers model='phenom-v1'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnow'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <feature name='3dnowext'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </blockers>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </mode>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </cpu>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <memoryBacking supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <enum name='sourceType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>anonymous</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <value>memfd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </memoryBacking>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <disk supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='diskDevice'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>disk</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cdrom</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>floppy</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>lun</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ide</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>fdc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>sata</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </disk>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <graphics supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vnc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egl-headless</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </graphics>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <video supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='modelType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vga</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>cirrus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>none</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>bochs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ramfb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </video>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hostdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='mode'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>subsystem</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='startupPolicy'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>mandatory</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>requisite</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>optional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='subsysType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pci</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>scsi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='capsType'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='pciBackend'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hostdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <rng supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtio-non-transitional</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>random</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>egd</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </rng>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <filesystem supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='driverType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>path</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>handle</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>virtiofs</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </filesystem>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <tpm supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-tis</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tpm-crb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emulator</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>external</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendVersion'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>2.0</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </tpm>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <redirdev supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='bus'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>usb</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </redirdev>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <channel supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </channel>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <crypto supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendModel'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>builtin</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </crypto>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <interface supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='backendType'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>default</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>passt</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </interface>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <panic supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='model'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>isa</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>hyperv</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </panic>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <console supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='type'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>null</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vc</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pty</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dev</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>file</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>pipe</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stdio</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>udp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tcp</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>unix</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>qemu-vdagent</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>dbus</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </console>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </devices>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  <features>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <gic supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <genid supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <backup supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <async-teardown supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <ps2 supported='yes'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sev supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <sgx supported='no'/>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <hyperv supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='features'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>relaxed</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vapic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>spinlocks</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vpindex</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>runtime</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>synic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>stimer</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reset</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>vendor_id</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>frequencies</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>reenlightenment</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tlbflush</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>ipi</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>avic</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>emsr_bitmap</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>xmm_input</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </defaults>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </hyperv>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    <launchSecurity supported='yes'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      <enum name='sectype'>
Dec  3 01:41:37 compute-0 nova_compute[350390]:        <value>tdx</value>
Dec  3 01:41:37 compute-0 nova_compute[350390]:      </enum>
Dec  3 01:41:37 compute-0 nova_compute[350390]:    </launchSecurity>
Dec  3 01:41:37 compute-0 nova_compute[350390]:  </features>
Dec  3 01:41:37 compute-0 nova_compute[350390]: </domainCapabilities>
Dec  3 01:41:37 compute-0 nova_compute[350390]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.541 350396 DEBUG nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.541 350396 INFO nova.virt.libvirt.host [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Secure Boot support detected#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.545 350396 INFO nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.545 350396 INFO nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.568 350396 DEBUG nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.622 350396 INFO nova.virt.node [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.658 350396 WARNING nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Compute nodes ['107397d2-51bc-4a03-bce4-7cd69319cf05'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.715 350396 INFO nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  3 01:41:37 compute-0 python3.9[350922]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 WARNING nova.compute.manager [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.774 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:41:37 compute-0 nova_compute[350390]: 2025-12-03 01:41:37.775 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:41:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082600722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.249 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:41:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec  3 01:41:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.845 350396 WARNING nova.virt.libvirt.driver [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.847 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.866 350396 WARNING nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] No compute node record for compute-0.ctlplane.example.com:107397d2-51bc-4a03-bce4-7cd69319cf05: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 107397d2-51bc-4a03-bce4-7cd69319cf05 could not be found.
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.896 350396 INFO nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 107397d2-51bc-4a03-bce4-7cd69319cf05
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.957 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 01:41:38 compute-0 nova_compute[350390]: 2025-12-03 01:41:38.958 350396 DEBUG nova.compute.resource_tracker [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 01:41:38 compute-0 python3.9[351119]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 01:41:39 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:41:39 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:41:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.069 350396 INFO nova.scheduler.client.report [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] [req-e196463b-40da-40c9-9372-3ef0ada7f326] Created resource provider record via placement API for resource provider with UUID 107397d2-51bc-4a03-bce4-7cd69319cf05 and name compute-0.ctlplane.example.com.
Dec  3 01:41:40 compute-0 python3.9[351291]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.485 350396 DEBUG oslo_concurrency.processutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 01:41:40 compute-0 systemd[1]: Stopping nova_compute container...
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.679 350396 DEBUG oslo_concurrency.lockutils [None req-5db8ed89-4cdc-4a80-a659-ef4e6c144923 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.680 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.681 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 01:41:40 compute-0 nova_compute[350390]: 2025-12-03 01:41:40.681 350396 DEBUG oslo_concurrency.lockutils [None req-2d063d70-4173-41ac-98c3-8b71d9e03a22 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.975 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f00ebd496a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eda45910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00eabec2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4be90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebcadee0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.979 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f00ebd4bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f00ea6a7710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f00ebd4b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f00edba6090>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f00ebd4bb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f00ebd4b140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f00ebd4b1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.984 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f00ebd4b200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f00ebd4b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f00eabec290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f00ebd4b2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f00ebd4b320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.986 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f00ebd4b380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f00ebd4bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f00ebd4b3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.987 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f00ebd4bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f00ebd4bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f00ebd4bce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f00ebe0e030>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f00ebd4bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.989 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f00ebd4b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f00ede91a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f00ebd4be60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f00ebd4b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f00ede92450>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f00ebd4bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f00ebd4bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f00ebd0f9e0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.992 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.993 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:40 compute-0 ceilometer_agent_compute[154605]: 2025-12-03 01:41:40.994 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:41:41 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec  3 01:41:41 compute-0 systemd[1]: libpod-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101.scope: Deactivated successfully.
Dec  3 01:41:41 compute-0 systemd[1]: libpod-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101.scope: Consumed 4.116s CPU time.
Dec  3 01:41:41 compute-0 podman[351296]: 2025-12-03 01:41:41.074400688 +0000 UTC m=+0.517533283 container died 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Dec  3 01:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101-userdata-shm.mount: Deactivated successfully.
Dec  3 01:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef-merged.mount: Deactivated successfully.
Dec  3 01:41:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:42 compute-0 podman[351296]: 2025-12-03 01:41:42.823926985 +0000 UTC m=+2.267059570 container cleanup 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Dec  3 01:41:42 compute-0 podman[351296]: nova_compute
Dec  3 01:41:42 compute-0 podman[351447]: nova_compute
Dec  3 01:41:42 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  3 01:41:42 compute-0 systemd[1]: Stopped nova_compute container.
Dec  3 01:41:42 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.351s CPU time, 20.6M memory peak, read 0B from disk, written 105.5K to disk.
Dec  3 01:41:42 compute-0 systemd[1]: Starting nova_compute container...
Dec  3 01:41:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b68e35a6c38835a07ca7b432662818307ea714e030c65d1dee2979c23a2baef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:43 compute-0 podman[351467]: 2025-12-03 01:41:43.135681647 +0000 UTC m=+0.158969099 container init 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 01:41:43 compute-0 podman[351467]: 2025-12-03 01:41:43.147632742 +0000 UTC m=+0.170920154 container start 1889b1738f438ee313befe0f02ea00cb2618a8f557e17b1fe752d5f1aa7d3101 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 01:41:43 compute-0 podman[351467]: nova_compute
Dec  3 01:41:43 compute-0 nova_compute[351485]: + sudo -E kolla_set_configs
Dec  3 01:41:43 compute-0 systemd[1]: Started nova_compute container.
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Validating config file
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying service configuration files
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /etc/ceph
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Creating directory /etc/ceph
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Writing out command to execute
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:43 compute-0 nova_compute[351485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 01:41:43 compute-0 nova_compute[351485]: ++ cat /run_command
Dec  3 01:41:43 compute-0 nova_compute[351485]: + CMD=nova-compute
Dec  3 01:41:43 compute-0 nova_compute[351485]: + ARGS=
Dec  3 01:41:43 compute-0 nova_compute[351485]: + sudo kolla_copy_cacerts
Dec  3 01:41:43 compute-0 nova_compute[351485]: + [[ ! -n '' ]]
Dec  3 01:41:43 compute-0 nova_compute[351485]: + . kolla_extend_start
Dec  3 01:41:43 compute-0 nova_compute[351485]: Running command: 'nova-compute'
Dec  3 01:41:43 compute-0 nova_compute[351485]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 01:41:43 compute-0 nova_compute[351485]: + umask 0022
Dec  3 01:41:43 compute-0 nova_compute[351485]: + exec nova-compute
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a1177377-94bf-4365-8239-3dc3fb9f8f4b does not exist
Dec  3 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 27cd01a6-fc67-44b1-8d4e-87843be3568c does not exist
Dec  3 01:41:43 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a5ea1430-cf1e-4e45-86ce-0ab80dfe2682 does not exist
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:43 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:41:44 compute-0 python3.9[351767]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.512398392 +0000 UTC m=+0.096957039 container create 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.474577456 +0000 UTC m=+0.059136163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:44 compute-0 systemd[1]: Started libpod-conmon-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope.
Dec  3 01:41:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:44 compute-0 systemd[1]: Started libpod-conmon-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope.
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.653098571 +0000 UTC m=+0.237657218 container init 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:41:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.665839976 +0000 UTC m=+0.250398603 container start 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.671802313 +0000 UTC m=+0.256360950 container attach 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:44 compute-0 loving_nobel[351854]: 167 167
Dec  3 01:41:44 compute-0 systemd[1]: libpod-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope: Deactivated successfully.
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.700669679 +0000 UTC m=+0.285228326 container died 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:41:44 compute-0 podman[351840]: 2025-12-03 01:41:44.72433281 +0000 UTC m=+0.188332100 container init 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 01:41:44 compute-0 podman[351840]: 2025-12-03 01:41:44.734261887 +0000 UTC m=+0.198261167 container start 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7df7ecf9a2e03bafad96681792113b94c7d3e556ebd0565973dc8a720fef7725-merged.mount: Deactivated successfully.
Dec  3 01:41:44 compute-0 python3.9[351767]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  3 01:41:44 compute-0 podman[351810]: 2025-12-03 01:41:44.76838802 +0000 UTC m=+0.352946647 container remove 799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:41:44 compute-0 systemd[1]: libpod-conmon-799f826c91cffadfcc25c12439bebb75427780922aa84b57bfac7a640d1c2ac6.scope: Deactivated successfully.
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Applying nova statedir ownership
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  3 01:41:44 compute-0 nova_compute_init[351880]: INFO:nova_statedir:Nova statedir ownership complete
Dec  3 01:41:44 compute-0 systemd[1]: libpod-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope: Deactivated successfully.
Dec  3 01:41:44 compute-0 podman[351882]: 2025-12-03 01:41:44.819984501 +0000 UTC m=+0.043499346 container died 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec  3 01:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa-userdata-shm.mount: Deactivated successfully.
Dec  3 01:41:44 compute-0 podman[351893]: 2025-12-03 01:41:44.885221243 +0000 UTC m=+0.069848022 container cleanup 540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  3 01:41:44 compute-0 systemd[1]: libpod-conmon-540e2e9404e81677d7621395e04fb189d09872932cfad9cabeac5fc917d6fffa.scope: Deactivated successfully.
Dec  3 01:41:44 compute-0 podman[351922]: 2025-12-03 01:41:44.971165463 +0000 UTC m=+0.053473675 container create 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:41:45 compute-0 systemd[1]: Started libpod-conmon-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope.
Dec  3 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:44.94850407 +0000 UTC m=+0.030812322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.113290371 +0000 UTC m=+0.195598603 container init 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.132617081 +0000 UTC m=+0.214925293 container start 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:41:45 compute-0 podman[351922]: 2025-12-03 01:41:45.137863138 +0000 UTC m=+0.220171390 container attach 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.376 351492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.377 351492 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.537 351492 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e55ba46e4fbc6debe88a6407308101f83be09f1b64c2b54a9dd9544d84ba9b34-merged.mount: Deactivated successfully.
Dec  3 01:41:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.564 351492 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:41:45 compute-0 nova_compute[351485]: 2025-12-03 01:41:45.564 351492 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  3 01:41:45 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec  3 01:41:45 compute-0 systemd[1]: session-55.scope: Consumed 4min 13.490s CPU time.
Dec  3 01:41:45 compute-0 systemd-logind[800]: Session 55 logged out. Waiting for processes to exit.
Dec  3 01:41:45 compute-0 systemd-logind[800]: Removed session 55.
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.164 351492 INFO nova.virt.driver [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.294 351492 INFO nova.compute.provider_config [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.314 351492 DEBUG oslo_concurrency.lockutils [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.315 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.316 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.317 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.318 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.319 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.320 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.321 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.322 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.323 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.324 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.325 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.326 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.327 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.328 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.329 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.330 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.331 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.332 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.333 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.334 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.335 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.336 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.337 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.338 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.339 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.340 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.341 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.342 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.343 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.344 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.345 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.346 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.347 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.348 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.349 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.350 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.351 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.352 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.353 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.354 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.355 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 laughing_clarke[351955]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:41:46 compute-0 laughing_clarke[351955]: --> relative data size: 1.0
Dec  3 01:41:46 compute-0 laughing_clarke[351955]: --> All data devices are unavailable
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.356 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.357 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.358 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.359 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.360 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.361 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.362 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.363 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.364 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.365 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.366 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.367 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.368 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.369 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.370 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.371 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.372 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.373 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.374 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.375 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.376 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.377 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.378 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.379 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.380 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.381 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.382 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.383 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.384 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.385 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.386 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.387 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.388 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.389 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.390 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.391 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.392 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.393 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 WARNING oslo_config.cfg [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 01:41:46 compute-0 nova_compute[351485]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 01:41:46 compute-0 nova_compute[351485]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  3 01:41:46 compute-0 nova_compute[351485]: and ``live_migration_inbound_addr`` respectively.
Dec  3 01:41:46 compute-0 nova_compute[351485]: ).  Its value may be silently ignored in the future.#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.394 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.395 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.396 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_secret_uuid        = 3765feb2-36f8-5b86-b74c-64e9221f9c4c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.397 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.398 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.399 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.400 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.401 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.402 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 systemd[1]: libpod-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Deactivated successfully.
Dec  3 01:41:46 compute-0 systemd[1]: libpod-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Consumed 1.181s CPU time.
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.405 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.406 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 podman[351922]: 2025-12-03 01:41:46.406992457 +0000 UTC m=+1.489300709 container died 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.407 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.408 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.409 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.410 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.411 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.412 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.413 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.414 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.415 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.416 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.417 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.418 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.419 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.420 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.421 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.422 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.423 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.424 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.425 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.426 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.427 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.428 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.429 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.430 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.431 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.432 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.433 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.434 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.435 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.436 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.437 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.438 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.439 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.440 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.441 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.442 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.443 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.444 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.445 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.446 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.447 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.448 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.449 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.450 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.451 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.452 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.453 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.454 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.455 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.456 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.457 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.458 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.459 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.460 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.461 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.462 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.463 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.464 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.465 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.466 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.467 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bd18cbc263736a345d44abc52428637b8d3024d095c5580e6dcb99ca032084-merged.mount: Deactivated successfully.
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.468 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.469 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.470 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.470 351492 DEBUG oslo_service.service [None req-8ca419df-1554-4dd2-90bf-d4f5e42900a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.471 351492 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.496 351492 INFO nova.virt.node [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.496 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.497 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  3 01:41:46 compute-0 podman[351922]: 2025-12-03 01:41:46.513856551 +0000 UTC m=+1.596164773 container remove 49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_clarke, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.517 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fee0837a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  3 01:41:46 compute-0 systemd[1]: libpod-conmon-49a614d12b60a009c34d7d67e12af2911da0a097287374c60321e7e2e56fab9a.scope: Deactivated successfully.
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.537 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fee0837a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.539 351492 INFO nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.547 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]: 
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <host>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <uuid>bb85f21b-9f67-464f-8fbe-e50d4e1e7eb4</uuid>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <arch>x86_64</arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model>EPYC-Rome-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <vendor>AMD</vendor>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <microcode version='16777317'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <signature family='23' model='49' stepping='0'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='x2apic'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='tsc-deadline'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='osxsave'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='hypervisor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='tsc_adjust'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='spec-ctrl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='stibp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='arch-capabilities'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='cmp_legacy'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='topoext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='virt-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='lbrv'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='tsc-scale'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='vmcb-clean'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='pause-filter'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='pfthreshold'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='svme-addr-chk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='rdctl-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='mds-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature name='pschange-mc-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <pages unit='KiB' size='4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <pages unit='KiB' size='2048'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <pages unit='KiB' size='1048576'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <power_management>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <suspend_mem/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </power_management>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <iommu support='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <migration_features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <live/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <uri_transports>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <uri_transport>tcp</uri_transport>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <uri_transport>rdma</uri_transport>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </uri_transports>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </migration_features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <topology>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <cells num='1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <cell id='0'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <memory unit='KiB'>7864312</memory>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <pages unit='KiB' size='4'>1966078</pages>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <distances>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <sibling id='0' value='10'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          </distances>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          <cpus num='8'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:          </cpus>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        </cell>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </cells>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </topology>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <cache>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </cache>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <secmodel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model>selinux</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <doi>0</doi>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </secmodel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <secmodel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model>dac</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <doi>0</doi>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </secmodel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </host>
Dec  3 01:41:46 compute-0 nova_compute[351485]: 
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <guest>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <os_type>hvm</os_type>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <arch name='i686'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <wordsize>32</wordsize>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <domain type='qemu'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <domain type='kvm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <pae/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <nonpae/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <acpi default='on' toggle='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <apic default='on' toggle='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <cpuselection/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <deviceboot/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <disksnapshot default='on' toggle='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <externalSnapshot/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </guest>
Dec  3 01:41:46 compute-0 nova_compute[351485]: 
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <guest>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <os_type>hvm</os_type>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <arch name='x86_64'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <wordsize>64</wordsize>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <domain type='qemu'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <domain type='kvm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <acpi default='on' toggle='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <apic default='on' toggle='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <cpuselection/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <deviceboot/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <disksnapshot default='on' toggle='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <externalSnapshot/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </guest>
Dec  3 01:41:46 compute-0 nova_compute[351485]: 
Dec  3 01:41:46 compute-0 nova_compute[351485]: </capabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]: #033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.555 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.556 351492 DEBUG nova.virt.libvirt.volume.mount [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.562 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <domain>kvm</domain>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <arch>i686</arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <vcpu max='240'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <iothreads supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <os supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='firmware'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <loader supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>rom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pflash</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='readonly'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>yes</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='secure'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </loader>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </os>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='maximumMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <vendor>AMD</vendor>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='succor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='custom' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-128'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-256'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-512'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <memoryBacking supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='sourceType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>anonymous</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>memfd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </memoryBacking>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <disk supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='diskDevice'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>disk</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cdrom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>floppy</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>lun</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ide</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>fdc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>sata</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <graphics supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vnc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egl-headless</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </graphics>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <video supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='modelType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vga</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cirrus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>none</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>bochs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ramfb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </video>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hostdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='mode'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>subsystem</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='startupPolicy'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>mandatory</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>requisite</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>optional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='subsysType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pci</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='capsType'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='pciBackend'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hostdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <rng supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>random</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <filesystem supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='driverType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>path</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>handle</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtiofs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </filesystem>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <tpm supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-tis</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-crb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emulator</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>external</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendVersion'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>2.0</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </tpm>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <redirdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </redirdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <channel supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </channel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <crypto supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </crypto>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <interface supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>passt</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <panic supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>isa</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>hyperv</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </panic>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <console supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>null</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dev</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pipe</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stdio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>udp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tcp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu-vdagent</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </console>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <gic supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <genid supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backup supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <async-teardown supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <ps2 supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sev supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sgx supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hyperv supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='features'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>relaxed</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vapic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>spinlocks</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vpindex</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>runtime</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>synic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stimer</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reset</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vendor_id</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>frequencies</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reenlightenment</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tlbflush</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ipi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>avic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emsr_bitmap</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>xmm_input</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hyperv>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <launchSecurity supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='sectype'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tdx</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </launchSecurity>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </features>
Dec  3 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.569 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <domain>kvm</domain>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <arch>i686</arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <vcpu max='4096'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <iothreads supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <os supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='firmware'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <loader supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>rom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pflash</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='readonly'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>yes</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='secure'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </loader>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </os>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='maximumMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <vendor>AMD</vendor>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='succor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='custom' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-128'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-256'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-512'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <memoryBacking supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='sourceType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>anonymous</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>memfd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </memoryBacking>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <disk supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='diskDevice'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>disk</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cdrom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>floppy</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>lun</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>fdc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>sata</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <graphics supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vnc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egl-headless</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </graphics>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <video supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='modelType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vga</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cirrus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>none</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>bochs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ramfb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </video>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hostdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='mode'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>subsystem</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='startupPolicy'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>mandatory</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>requisite</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>optional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='subsysType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pci</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='capsType'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='pciBackend'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hostdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <rng supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>random</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <filesystem supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='driverType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>path</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>handle</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtiofs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </filesystem>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <tpm supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-tis</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-crb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emulator</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>external</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendVersion'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>2.0</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </tpm>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <redirdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </redirdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <channel supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </channel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <crypto supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </crypto>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <interface supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>passt</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <panic supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>isa</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>hyperv</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </panic>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <console supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>null</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dev</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pipe</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stdio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>udp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tcp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu-vdagent</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </console>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <gic supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <genid supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backup supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <async-teardown supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <ps2 supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sev supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sgx supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hyperv supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='features'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>relaxed</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vapic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>spinlocks</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vpindex</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>runtime</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>synic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stimer</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reset</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vendor_id</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>frequencies</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reenlightenment</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tlbflush</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ipi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>avic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emsr_bitmap</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>xmm_input</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hyperv>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <launchSecurity supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='sectype'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tdx</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </launchSecurity>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </features>
Dec  3 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.621 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.626 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <domain>kvm</domain>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <arch>x86_64</arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <vcpu max='240'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <iothreads supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <os supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='firmware'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <loader supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>rom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pflash</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='readonly'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>yes</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='secure'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </loader>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </os>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='maximumMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <vendor>AMD</vendor>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='succor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='custom' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-128'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-256'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-512'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='athlon-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='core2duo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='coreduo-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='n270-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='phenom-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <memoryBacking supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='sourceType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>anonymous</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>memfd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </memoryBacking>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <disk supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='diskDevice'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>disk</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cdrom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>floppy</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>lun</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ide</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>fdc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>sata</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <graphics supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vnc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egl-headless</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </graphics>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <video supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='modelType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vga</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>cirrus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>none</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>bochs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ramfb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </video>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hostdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='mode'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>subsystem</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='startupPolicy'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>mandatory</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>requisite</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>optional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='subsysType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pci</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='capsType'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='pciBackend'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hostdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <rng supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>random</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>egd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <filesystem supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='driverType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>path</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>handle</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>virtiofs</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </filesystem>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <tpm supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-tis</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tpm-crb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emulator</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>external</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendVersion'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>2.0</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </tpm>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <redirdev supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </redirdev>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <channel supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </channel>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <crypto supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </crypto>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <interface supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='backendType'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>passt</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <panic supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>isa</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>hyperv</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </panic>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <console supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>null</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vc</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dev</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>file</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pipe</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stdio</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>udp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tcp</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>qemu-vdagent</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </console>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <features>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <gic supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <genid supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <backup supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <async-teardown supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <ps2 supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sev supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <sgx supported='no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <hyperv supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='features'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>relaxed</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vapic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>spinlocks</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vpindex</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>runtime</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>synic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>stimer</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reset</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>vendor_id</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>frequencies</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>reenlightenment</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tlbflush</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>ipi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>avic</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>emsr_bitmap</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>xmm_input</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </defaults>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </hyperv>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <launchSecurity supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='sectype'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>tdx</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </launchSecurity>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </features>
Dec  3 01:41:46 compute-0 nova_compute[351485]: </domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 01:41:46 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.742 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 01:41:46 compute-0 nova_compute[351485]: <domainCapabilities>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <domain>kvm</domain>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <arch>x86_64</arch>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <vcpu max='4096'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <iothreads supported='yes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <os supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <enum name='firmware'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>efi</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <loader supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>rom</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>pflash</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='readonly'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>yes</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='secure'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>yes</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>no</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </loader>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  </os>
Dec  3 01:41:46 compute-0 nova_compute[351485]:  <cpu>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-passthrough' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='hostPassthroughMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='maximum' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <enum name='maximumMigratable'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>on</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <value>off</value>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='host-model' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <vendor>AMD</vendor>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='x2apic'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='hypervisor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='stibp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='overflow-recov'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='succor'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lbrv'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='tsc-scale'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='flushbyasid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pause-filter'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='pfthreshold'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <feature policy='disable' name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:46 compute-0 nova_compute[351485]:    <mode name='custom' supported='yes'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Broadwell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Cooperlake-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Denverton-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Dhyana-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='auto-ibrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Milan-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amd-psfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='no-nested-data-bp'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='null-sel-clr-base'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='stibp-always-on'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-Rome-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='EPYC-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='GraniteRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-128'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-256'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx10-512'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='prefetchiti'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Haswell-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v6'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Icelake-Server-v7'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='IvyBridge-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='KnightsMill-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4fmaps'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-4vnniw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512er'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512pf'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G4-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Opteron_G5-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fma4'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tbm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xop'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SapphireRapids-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='amx-tile'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-bf16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-fp16'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512-vpopcntdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bitalg'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vbmi2'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrc'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fzrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='la57'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='taa-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='tsx-ldtrk'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xfd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='SierraForest-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ifma'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-ne-convert'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx-vnni-int8'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='bus-lock-detect'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='cmpccxadd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fbsdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='fsrs'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='ibrs-all'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='mcdt-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pbrsb-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='psdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='sbdr-ssdp-no'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='serialize'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vaes'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='vpclmulqdq'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v1'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v2'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v3'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Client-v4'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 01:41:46 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server'>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:46 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v2'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='hle'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='rtm'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v3'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v4'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Skylake-Server-v5'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512bw'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512cd'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512dq'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512f'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='avx512vl'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='invpcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pcid'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='pku'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Snowridge'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='mpx'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v2'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v3'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='core-capability'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='split-lock-detect'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='Snowridge-v4'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='cldemote'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='erms'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='gfni'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdir64b'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='movdiri'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='xsaves'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='athlon'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='athlon-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='core2duo'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='core2duo-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='coreduo'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='coreduo-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='n270'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='n270-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='ss'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='phenom'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <blockers model='phenom-v1'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnow'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <feature name='3dnowext'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </blockers>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </mode>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  <memoryBacking supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <enum name='sourceType'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <value>file</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <value>anonymous</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <value>memfd</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  </memoryBacking>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <disk supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='diskDevice'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>disk</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>cdrom</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>floppy</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>lun</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>fdc</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>sata</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <graphics supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vnc</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>egl-headless</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </graphics>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <video supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='modelType'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vga</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>cirrus</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>none</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>bochs</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>ramfb</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </video>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <hostdev supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='mode'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>subsystem</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='startupPolicy'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>mandatory</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>requisite</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>optional</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='subsysType'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>pci</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>scsi</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='capsType'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='pciBackend'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </hostdev>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <rng supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio-transitional</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtio-non-transitional</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>random</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>egd</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <filesystem supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='driverType'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>path</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>handle</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>virtiofs</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </filesystem>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <tpm supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>tpm-tis</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>tpm-crb</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>emulator</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>external</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='backendVersion'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>2.0</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </tpm>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <redirdev supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='bus'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>usb</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </redirdev>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <channel supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </channel>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <crypto supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='model'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>qemu</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='backendModel'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>builtin</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </crypto>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <interface supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='backendType'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>default</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>passt</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <panic supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='model'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>isa</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>hyperv</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </panic>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <console supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='type'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>null</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vc</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>pty</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>dev</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>file</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>pipe</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>stdio</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>udp</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>tcp</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>unix</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>qemu-vdagent</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>dbus</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </console>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  <features>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <gic supported='no'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <vmcoreinfo supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <genid supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <backingStoreInput supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <backup supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <async-teardown supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <ps2 supported='yes'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <sev supported='no'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <sgx supported='no'/>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <hyperv supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='features'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>relaxed</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vapic</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>spinlocks</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vpindex</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>runtime</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>synic</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>stimer</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>reset</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>vendor_id</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>frequencies</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>reenlightenment</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>tlbflush</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>ipi</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>avic</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>emsr_bitmap</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>xmm_input</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <defaults>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <spinlocks>4095</spinlocks>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <stimer_direct>on</stimer_direct>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </defaults>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </hyperv>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    <launchSecurity supported='yes'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      <enum name='sectype'>
Dec  3 01:41:47 compute-0 nova_compute[351485]:        <value>tdx</value>
Dec  3 01:41:47 compute-0 nova_compute[351485]:      </enum>
Dec  3 01:41:47 compute-0 nova_compute[351485]:    </launchSecurity>
Dec  3 01:41:47 compute-0 nova_compute[351485]:  </features>
Dec  3 01:41:47 compute-0 nova_compute[351485]: </domainCapabilities>
Dec  3 01:41:47 compute-0 nova_compute[351485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.859 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.860 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.861 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Secure Boot support detected#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.866 351492 INFO nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.890 351492 DEBUG nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.918 351492 INFO nova.virt.node [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Determined node identity 107397d2-51bc-4a03-bce4-7cd69319cf05 from /var/lib/nova/compute_id#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.943 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Verified node 107397d2-51bc-4a03-bce4-7cd69319cf05 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:46.972 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.077 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.078 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.079 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.079 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.080 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.459916569 +0000 UTC m=+0.066659242 container create aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:41:47 compute-0 systemd[1]: Started libpod-conmon-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope.
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.433593074 +0000 UTC m=+0.040335767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.550956381 +0000 UTC m=+0.157699084 container init aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:41:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.561772313 +0000 UTC m=+0.168515006 container start aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907357913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:41:47 compute-0 unruffled_hugle[352209]: 167 167
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.568576463 +0000 UTC m=+0.175319156 container attach aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:41:47 compute-0 conmon[352209]: conmon aa4a055fd92c25a1d7cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope/container/memory.events
Dec  3 01:41:47 compute-0 systemd[1]: libpod-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope: Deactivated successfully.
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.570629311 +0000 UTC m=+0.177372004 container died aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:47 compute-0 nova_compute[351485]: 2025-12-03 01:41:47.598 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:41:47 compute-0 podman[352208]: 2025-12-03 01:41:47.600569727 +0000 UTC m=+0.094591793 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7ab8a4a709445e8bc0d7aedcfb2c18cdf835d4085bb74fa3636f461159bc55-merged.mount: Deactivated successfully.
Dec  3 01:41:47 compute-0 podman[352205]: 2025-12-03 01:41:47.617033627 +0000 UTC m=+0.109817608 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 01:41:47 compute-0 podman[352194]: 2025-12-03 01:41:47.622622003 +0000 UTC m=+0.229364676 container remove aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:41:47 compute-0 systemd[1]: libpod-conmon-aa4a055fd92c25a1d7cd35312a58d05565832784b874277c6994eede20a0548c.scope: Deactivated successfully.
Dec  3 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.859193449 +0000 UTC m=+0.062951939 container create f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:41:47 compute-0 systemd[1]: Started libpod-conmon-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope.
Dec  3 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.835652791 +0000 UTC m=+0.039411301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:47 compute-0 podman[352274]: 2025-12-03 01:41:47.984605031 +0000 UTC m=+0.188363521 container init f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:41:48 compute-0 podman[352274]: 2025-12-03 01:41:48.006017269 +0000 UTC m=+0.209775759 container start f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:41:48 compute-0 podman[352274]: 2025-12-03 01:41:48.011096681 +0000 UTC m=+0.214855181 container attach f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.029 351492 WARNING nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.030 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4537MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.031 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.031 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.229 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.230 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.356 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.387 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.387 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.412 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.430 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.459 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:41:48 compute-0 trusting_banzai[352290]: {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    "0": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "devices": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "/dev/loop3"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            ],
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_name": "ceph_lv0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_size": "21470642176",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "name": "ceph_lv0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "tags": {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_name": "ceph",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.crush_device_class": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.encrypted": "0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_id": "0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.vdo": "0"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            },
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "vg_name": "ceph_vg0"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        }
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    ],
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    "1": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "devices": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "/dev/loop4"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            ],
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_name": "ceph_lv1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_size": "21470642176",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "name": "ceph_lv1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "tags": {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_name": "ceph",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.crush_device_class": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.encrypted": "0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_id": "1",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.vdo": "0"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            },
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "vg_name": "ceph_vg1"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        }
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    ],
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    "2": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "devices": [
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "/dev/loop5"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            ],
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_name": "ceph_lv2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_size": "21470642176",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "name": "ceph_lv2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "tags": {
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.cluster_name": "ceph",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.crush_device_class": "",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.encrypted": "0",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osd_id": "2",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:                "ceph.vdo": "0"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            },
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "type": "block",
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:            "vg_name": "ceph_vg2"
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:        }
Dec  3 01:41:48 compute-0 trusting_banzai[352290]:    ]
Dec  3 01:41:48 compute-0 trusting_banzai[352290]: }
Dec  3 01:41:48 compute-0 systemd[1]: libpod-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope: Deactivated successfully.
Dec  3 01:41:48 compute-0 podman[352319]: 2025-12-03 01:41:48.908914302 +0000 UTC m=+0.049578796 container died f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:41:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:41:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938662527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b90857e7bba2ff2f001e51fb0a7d412b16ce463db628c36ea7ebc826c748edd0-merged.mount: Deactivated successfully.
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.973 351492 DEBUG oslo_concurrency.processutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.985 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  3 01:41:48 compute-0 nova_compute[351485]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.985 351492 INFO nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.987 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:41:48 compute-0 nova_compute[351485]: 2025-12-03 01:41:48.988 351492 DEBUG nova.virt.libvirt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 01:41:48 compute-0 podman[352319]: 2025-12-03 01:41:48.998559914 +0000 UTC m=+0.139224378 container remove f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:41:49 compute-0 systemd[1]: libpod-conmon-f706e69fc823b5c80aef84ec6e94241438a2fc4956aaafde0d6227bda8bb8626.scope: Deactivated successfully.
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.scheduler.client.report [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updated inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.056 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.148 351492 DEBUG nova.compute.provider_tree [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.175 351492 DEBUG nova.compute.resource_tracker [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.176 351492 DEBUG oslo_concurrency.lockutils [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.176 351492 DEBUG nova.service [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.317 351492 DEBUG nova.service [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  3 01:41:49 compute-0 nova_compute[351485]: 2025-12-03 01:41:49.318 351492 DEBUG nova.servicegroup.drivers.db [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  3 01:41:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.151279503 +0000 UTC m=+0.085334073 container create e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.115937047 +0000 UTC m=+0.049991657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:50 compute-0 systemd[1]: Started libpod-conmon-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope.
Dec  3 01:41:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.307809245 +0000 UTC m=+0.241863855 container init e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.325018565 +0000 UTC m=+0.259073145 container start e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.331904807 +0000 UTC m=+0.265959367 container attach e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:41:50 compute-0 silly_pascal[352493]: 167 167
Dec  3 01:41:50 compute-0 systemd[1]: libpod-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope: Deactivated successfully.
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.337805512 +0000 UTC m=+0.271860082 container died e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-249719f639ffb87f5d005793d1001d631ca8b86f5f4fbafceadbd612adcb91d7-merged.mount: Deactivated successfully.
Dec  3 01:41:50 compute-0 podman[352479]: 2025-12-03 01:41:50.410193604 +0000 UTC m=+0.344248164 container remove e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:41:50 compute-0 systemd[1]: libpod-conmon-e105cd9bafb35bc1a0e944cec0ec3a2f95eb61a3d35c2b871ca99641b372677e.scope: Deactivated successfully.
Dec  3 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.660776611 +0000 UTC m=+0.076816296 container create 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.62565667 +0000 UTC m=+0.041696425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:41:50 compute-0 systemd[1]: Started libpod-conmon-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope.
Dec  3 01:41:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.834923364 +0000 UTC m=+0.250963079 container init 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.859210022 +0000 UTC m=+0.275249737 container start 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:41:50 compute-0 podman[352518]: 2025-12-03 01:41:50.865371004 +0000 UTC m=+0.281410689 container attach 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:41:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:51 compute-0 systemd-logind[800]: New session 57 of user zuul.
Dec  3 01:41:51 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]: {
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_id": 2,
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "type": "bluestore"
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    },
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_id": 1,
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "type": "bluestore"
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    },
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_id": 0,
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:        "type": "bluestore"
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]:    }
Dec  3 01:41:52 compute-0 vigilant_feynman[352534]: }
Dec  3 01:41:52 compute-0 systemd[1]: libpod-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Deactivated successfully.
Dec  3 01:41:52 compute-0 systemd[1]: libpod-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Consumed 1.287s CPU time.
Dec  3 01:41:52 compute-0 podman[352518]: 2025-12-03 01:41:52.15476514 +0000 UTC m=+1.570804845 container died 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 01:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6d9ed99e9cf51c421b3f1abd49aea94ca5526626881c62133d924f646526e17-merged.mount: Deactivated successfully.
Dec  3 01:41:52 compute-0 podman[352518]: 2025-12-03 01:41:52.253683042 +0000 UTC m=+1.669722757 container remove 777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:41:52 compute-0 systemd[1]: libpod-conmon-777cfbbacb108ac7bcbb636baf5dfcfd8efc6bb5cb9a8d0c68c98e7e99d4e26d.scope: Deactivated successfully.
Dec  3 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:41:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:41:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 19d465ef-3097-4daf-bc4e-ca36496b51ee does not exist
Dec  3 01:41:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9f74c3a4-277c-46b0-a00f-099ea3520360 does not exist
Dec  3 01:41:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:53 compute-0 python3.9[352781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:41:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:53 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:41:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:41:53 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:41:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:55 compute-0 python3.9[352938]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:41:55 compute-0 systemd[1]: Reloading.
Dec  3 01:41:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:41:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:41:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:56 compute-0 podman[353080]: 2025-12-03 01:41:56.864236109 +0000 UTC m=+0.101401223 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:41:56 compute-0 podman[353094]: 2025-12-03 01:41:56.892062776 +0000 UTC m=+0.119631682 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  3 01:41:56 compute-0 podman[353092]: 2025-12-03 01:41:56.909858813 +0000 UTC m=+0.146858142 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41)
Dec  3 01:41:56 compute-0 podman[353097]: 2025-12-03 01:41:56.930190631 +0000 UTC m=+0.151786060 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:41:57 compute-0 python3.9[353199]: ansible-ansible.builtin.service_facts Invoked
Dec  3 01:41:57 compute-0 network[353223]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 01:41:57 compute-0 network[353224]: 'network-scripts' will be removed from distribution in near future.
Dec  3 01:41:57 compute-0 network[353225]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 01:41:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:41:58 compute-0 podman[353232]: 2025-12-03 01:41:58.444085716 +0000 UTC m=+0.101253649 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:41:58 compute-0 podman[353234]: 2025-12-03 01:41:58.473839657 +0000 UTC m=+0.119891479 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:41:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.601 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:41:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:41:59 compute-0 podman[158098]: time="2025-12-03T01:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:42:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:42:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:04 compute-0 python3.9[353539]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:42:04 compute-0 podman[353565]: 2025-12-03 01:42:04.877404892 +0000 UTC m=+0.130213227 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm)
Dec  3 01:42:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:05 compute-0 python3.9[353710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:06 compute-0 python3.9[353862]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:08 compute-0 python3.9[354016]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:42:09 compute-0 python3.9[354170]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:42:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:10 compute-0 python3.9[354322]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 01:42:10 compute-0 systemd[1]: Reloading.
Dec  3 01:42:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 01:42:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 01:42:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:12 compute-0 python3.9[354510]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:42:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:13 compute-0 python3.9[354665]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:42:14 compute-0 python3.9[354815]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:42:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:16 compute-0 python3.9[354967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:16 compute-0 python3.9[355043]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:42:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:17 compute-0 podman[355167]: 2025-12-03 01:42:17.767883544 +0000 UTC m=+0.109155219 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:42:17 compute-0 podman[355168]: 2025-12-03 01:42:17.810917966 +0000 UTC m=+0.147194202 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 01:42:17 compute-0 python3.9[355235]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  3 01:42:19 compute-0 python3.9[355388]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  3 01:42:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:20 compute-0 python3.9[355540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:21 compute-0 python3.9[355616]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:22 compute-0 python3.9[355766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:23 compute-0 python3.9[355842]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:24 compute-0 python3.9[355992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:25 compute-0 python3.9[356068]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:26 compute-0 python3.9[356218]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:42:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:27 compute-0 podman[356345]: 2025-12-03 01:42:27.69377472 +0000 UTC m=+0.109392025 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec  3 01:42:27 compute-0 podman[356344]: 2025-12-03 01:42:27.696837326 +0000 UTC m=+0.127052189 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:42:27 compute-0 podman[356346]: 2025-12-03 01:42:27.714110948 +0000 UTC m=+0.130811014 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm)
Dec  3 01:42:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:27 compute-0 podman[356352]: 2025-12-03 01:42:27.756898983 +0000 UTC m=+0.160277337 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:42:27 compute-0 python3.9[356440]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:42:28
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:42:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:42:28 compute-0 podman[356583]: 2025-12-03 01:42:28.816497181 +0000 UTC m=+0.125741742 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:42:28 compute-0 podman[356584]: 2025-12-03 01:42:28.833950718 +0000 UTC m=+0.137840280 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 01:42:28 compute-0 python3.9[356640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:29 compute-0 podman[158098]: time="2025-12-03T01:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec  3 01:42:30 compute-0 python3.9[356724]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:42:31 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:42:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:31 compute-0 python3.9[356874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:32 compute-0 python3.9[356950]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:33 compute-0 nova_compute[351485]: 2025-12-03 01:42:33.322 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:33 compute-0 nova_compute[351485]: 2025-12-03 01:42:33.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:33 compute-0 python3.9[357100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:34 compute-0 python3.9[357176]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:35 compute-0 podman[357300]: 2025-12-03 01:42:35.373317366 +0000 UTC m=+0.117611525 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, container_name=kepler, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 01:42:35 compute-0 python3.9[357343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:36 compute-0 python3.9[357422]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:37 compute-0 python3.9[357572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 python3.9[357648]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:42:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:42:39 compute-0 python3.9[357798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:39 compute-0 python3.9[357874]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:40 compute-0 python3.9[358024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:41 compute-0 python3.9[358100]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:42 compute-0 python3.9[358250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:43 compute-0 python3.9[358326]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:44 compute-0 python3.9[358476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:45 compute-0 python3.9[358552]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.579 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.581 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.582 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:42:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.597 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.598 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.602 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.636 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.637 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:42:45 compute-0 nova_compute[351485]: 2025-12-03 01:42:45.638 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:42:46 compute-0 python3.9[358722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:42:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084696605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.183 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.750 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.753 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.754 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.755 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:42:46 compute-0 python3.9[358800]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.848 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.849 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:42:46 compute-0 nova_compute[351485]: 2025-12-03 01:42:46.884 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244810326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.371 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.385 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.402 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:42:47 compute-0 nova_compute[351485]: 2025-12-03 01:42:47.407 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:42:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:47 compute-0 python3.9[358972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:47 compute-0 podman[358974]: 2025-12-03 01:42:47.977679838 +0000 UTC m=+0.109067496 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 01:42:47 compute-0 podman[358976]: 2025-12-03 01:42:47.994855348 +0000 UTC m=+0.125149776 container health_status 7fad237e83203b5eedaa8e569aef182bad91144a6250aff77a20e5e0e95db23a (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:42:48 compute-0 python3.9[359089]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:49 compute-0 python3.9[359239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:51 compute-0 python3.9[359316]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:52 compute-0 python3.9[359466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:53 compute-0 python3.9[359655]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d02b211b-125d-4427-8a61-5db8b9d40e5a does not exist
Dec  3 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0ad94931-b458-4247-b136-f259396f8504 does not exist
Dec  3 01:42:53 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ac0d9c19-5a8b-4f82-960e-0d9ce869c3d9 does not exist
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:42:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:42:54 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:42:54 compute-0 python3.9[359923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.061933341 +0000 UTC m=+0.084028787 container create cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.024260959 +0000 UTC m=+0.046356415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:42:55 compute-0 systemd[1]: Started libpod-conmon-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope.
Dec  3 01:42:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.205257824 +0000 UTC m=+0.227353320 container init cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.215720686 +0000 UTC m=+0.237816092 container start cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.220019866 +0000 UTC m=+0.242115322 container attach cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:42:55 compute-0 gallant_ptolemy[360032]: 167 167
Dec  3 01:42:55 compute-0 systemd[1]: libpod-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope: Deactivated successfully.
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.22875257 +0000 UTC m=+0.250847986 container died cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-673e39b706c007f5833a829b53e28a13ebce7dd4d94780bfcb5864f84be60c0b-merged.mount: Deactivated successfully.
Dec  3 01:42:55 compute-0 podman[359986]: 2025-12-03 01:42:55.308835746 +0000 UTC m=+0.330931182 container remove cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:42:55 compute-0 systemd[1]: libpod-conmon-cf1c46957882355076149a717166b40e7f7de6b8384ef3ec752bdfb3c1e3c85a.scope: Deactivated successfully.
Dec  3 01:42:55 compute-0 python3.9[360056]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.618867724 +0000 UTC m=+0.094663405 container create 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.579208486 +0000 UTC m=+0.055004237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:42:55 compute-0 systemd[1]: Started libpod-conmon-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope.
Dec  3 01:42:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.808305954 +0000 UTC m=+0.284101695 container init 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.833064225 +0000 UTC m=+0.308859916 container start 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:42:55 compute-0 podman[360081]: 2025-12-03 01:42:55.847429126 +0000 UTC m=+0.323224817 container attach 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:42:56 compute-0 python3.9[360249]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:57 compute-0 happy_sanderson[360140]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:42:57 compute-0 happy_sanderson[360140]: --> relative data size: 1.0
Dec  3 01:42:57 compute-0 happy_sanderson[360140]: --> All data devices are unavailable
Dec  3 01:42:57 compute-0 podman[360081]: 2025-12-03 01:42:57.148866997 +0000 UTC m=+1.624662688 container died 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:42:57 compute-0 systemd[1]: libpod-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Deactivated successfully.
Dec  3 01:42:57 compute-0 systemd[1]: libpod-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Consumed 1.250s CPU time.
Dec  3 01:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1af4da527150caca81daef6e9e5d55c90536638b7275f68daac6b32a2823f05-merged.mount: Deactivated successfully.
Dec  3 01:42:57 compute-0 podman[360081]: 2025-12-03 01:42:57.246510754 +0000 UTC m=+1.722306405 container remove 9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:42:57 compute-0 systemd[1]: libpod-conmon-9254b44e1f4b16680fb35382642c5b4d9dc8530c9ab137542816fcbb54968338.scope: Deactivated successfully.
Dec  3 01:42:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:57 compute-0 python3.9[360472]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:42:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:42:57 compute-0 podman[360535]: 2025-12-03 01:42:57.864320196 +0000 UTC m=+0.110816776 container health_status 3fe787eef5f4d86df3ff138818a5095af0ab8edef01f7c0ce5ed7d0428fcef44 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 01:42:57 compute-0 podman[360545]: 2025-12-03 01:42:57.864319416 +0000 UTC m=+0.094679835 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 01:42:57 compute-0 podman[360542]: 2025-12-03 01:42:57.88272369 +0000 UTC m=+0.125109385 container health_status 0bac6ea38f446a4c6aa3dedd2a61f6d8f4f8e18a82db9b8d190f2dbd6827ebeb (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:42:57 compute-0 podman[360577]: 2025-12-03 01:42:57.935411671 +0000 UTC m=+0.120890637 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.269746947 +0000 UTC m=+0.086069474 container create 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:42:58 compute-0 systemd[1]: Started libpod-conmon-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope.
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.231319714 +0000 UTC m=+0.047642291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:42:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.394733058 +0000 UTC m=+0.211055625 container init 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.413691957 +0000 UTC m=+0.230014474 container start 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.421350481 +0000 UTC m=+0.237672998 container attach 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:42:58 compute-0 reverent_merkle[360794]: 167 167
Dec  3 01:42:58 compute-0 systemd[1]: libpod-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope: Deactivated successfully.
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.425327922 +0000 UTC m=+0.241650469 container died 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bdebad2f8129f78f9181e9aa367ea6b5097ffde8fead8b3e98a13d29c8f164e-merged.mount: Deactivated successfully.
Dec  3 01:42:58 compute-0 podman[360745]: 2025-12-03 01:42:58.506413386 +0000 UTC m=+0.322735883 container remove 3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:42:58 compute-0 systemd[1]: libpod-conmon-3651b10f78cb5211c2fe989ec4346d78ba907390f2489f51d140953303ec1330.scope: Deactivated successfully.
Dec  3 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.780658094 +0000 UTC m=+0.103820180 container create 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:42:58 compute-0 python3.9[360851]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.745463562 +0000 UTC m=+0.068625718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:42:58 compute-0 systemd[1]: Started libpod-conmon-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope.
Dec  3 01:42:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.953216473 +0000 UTC m=+0.276378579 container init 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.968085298 +0000 UTC m=+0.291247394 container start 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 01:42:58 compute-0 podman[360857]: 2025-12-03 01:42:58.983233691 +0000 UTC m=+0.306395787 container attach 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:42:59 compute-0 podman[360883]: 2025-12-03 01:42:59.059138891 +0000 UTC m=+0.159636439 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 01:42:59 compute-0 podman[360880]: 2025-12-03 01:42:59.070414596 +0000 UTC m=+0.170561244 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:42:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 01:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:42:59.602 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:42:59 compute-0 distracted_albattani[360879]: {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    "0": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "devices": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "/dev/loop3"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            ],
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_name": "ceph_lv0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_size": "21470642176",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "name": "ceph_lv0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "tags": {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_name": "ceph",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.crush_device_class": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.encrypted": "0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_id": "0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.vdo": "0"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            },
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "vg_name": "ceph_vg0"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        }
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    ],
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    "1": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "devices": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "/dev/loop4"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            ],
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_name": "ceph_lv1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_size": "21470642176",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "name": "ceph_lv1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "tags": {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_name": "ceph",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.crush_device_class": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.encrypted": "0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_id": "1",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.vdo": "0"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            },
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "vg_name": "ceph_vg1"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        }
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    ],
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    "2": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "devices": [
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "/dev/loop5"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            ],
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_name": "ceph_lv2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_size": "21470642176",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "name": "ceph_lv2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "tags": {
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.cluster_name": "ceph",
Dec  3 01:42:59 compute-0 podman[158098]: time="2025-12-03T01:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.crush_device_class": "",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.encrypted": "0",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osd_id": "2",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:                "ceph.vdo": "0"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            },
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "type": "block",
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:            "vg_name": "ceph_vg2"
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:        }
Dec  3 01:42:59 compute-0 distracted_albattani[360879]:    ]
Dec  3 01:42:59 compute-0 distracted_albattani[360879]: }
Dec  3 01:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44154 "" "Go-http-client/1.1"
Dec  3 01:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8551 "" "Go-http-client/1.1"
Dec  3 01:42:59 compute-0 systemd[1]: libpod-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope: Deactivated successfully.
Dec  3 01:42:59 compute-0 podman[360857]: 2025-12-03 01:42:59.781185024 +0000 UTC m=+1.104347110 container died 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b9ca804d42470950cde2bc4865cc62ba5332365aa40ea2bcd5cc4571263298c-merged.mount: Deactivated successfully.
Dec  3 01:42:59 compute-0 podman[360857]: 2025-12-03 01:42:59.869032777 +0000 UTC m=+1.192194863 container remove 3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 01:42:59 compute-0 systemd[1]: libpod-conmon-3a0a7e3b2d8b811608b8ca8be069aed7f0d83f23143586051944ca5eda262223.scope: Deactivated successfully.
Dec  3 01:43:00 compute-0 python3.9[361081]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:43:00 compute-0 podman[361299]: 2025-12-03 01:43:00.989894556 +0000 UTC m=+0.091788125 container create 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:00.956302518 +0000 UTC m=+0.058196147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:43:01 compute-0 systemd[1]: Started libpod-conmon-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope.
Dec  3 01:43:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.154852342 +0000 UTC m=+0.256745921 container init 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.172088173 +0000 UTC m=+0.273981742 container start 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.178988916 +0000 UTC m=+0.280882475 container attach 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:43:01 compute-0 peaceful_einstein[361355]: 167 167
Dec  3 01:43:01 compute-0 systemd[1]: libpod-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope: Deactivated successfully.
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.184239253 +0000 UTC m=+0.286132812 container died 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-91cc5adbb56cba453cbf7f82ea1aa5aa62278f91215f88143b0c3ea89b5e06d5-merged.mount: Deactivated successfully.
Dec  3 01:43:01 compute-0 podman[361299]: 2025-12-03 01:43:01.254745631 +0000 UTC m=+0.356639160 container remove 0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:43:01 compute-0 systemd[1]: libpod-conmon-0620c04a31cddbbac1eca6f0be83243a21859f317c72a637bab4a3fbfe78ac60.scope: Deactivated successfully.
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:43:01 compute-0 openstack_network_exporter[160250]: 
Dec  3 01:43:01 compute-0 python3.9[361409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.516218363 +0000 UTC m=+0.079007377 container create 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Dec  3 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.476164574 +0000 UTC m=+0.038953638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:43:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:43:01 compute-0 systemd[1]: Started libpod-conmon-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope.
Dec  3 01:43:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.663207258 +0000 UTC m=+0.225996242 container init 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.683833494 +0000 UTC m=+0.246622488 container start 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:43:01 compute-0 podman[361415]: 2025-12-03 01:43:01.690688225 +0000 UTC m=+0.253477309 container attach 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:43:02 compute-0 python3.9[361512]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:43:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:43:02 compute-0 sharp_solomon[361435]: {
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_id": 2,
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "type": "bluestore"
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    },
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_id": 1,
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "type": "bluestore"
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    },
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_id": 0,
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:        "type": "bluestore"
Dec  3 01:43:02 compute-0 sharp_solomon[361435]:    }
Dec  3 01:43:02 compute-0 sharp_solomon[361435]: }
Dec  3 01:43:02 compute-0 systemd[1]: libpod-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Deactivated successfully.
Dec  3 01:43:02 compute-0 systemd[1]: libpod-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Consumed 1.156s CPU time.
Dec  3 01:43:02 compute-0 podman[361415]: 2025-12-03 01:43:02.836199953 +0000 UTC m=+1.398988997 container died 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3def53c430f1ca87cd5aba578a9579cd1c0954fa01f8adf5d5af8f6157975a56-merged.mount: Deactivated successfully.
Dec  3 01:43:02 compute-0 podman[361415]: 2025-12-03 01:43:02.93922662 +0000 UTC m=+1.502015644 container remove 21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec  3 01:43:02 compute-0 systemd[1]: libpod-conmon-21e57be06f65c714742e1dcd084501d04490c3464d166074a448a2f6a5e571fe.scope: Deactivated successfully.
Dec  3 01:43:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:43:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:43:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:43:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:43:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a3974e0-ce99-4bf7-97f3-c74e18382241 does not exist
Dec  3 01:43:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5675ba5c-d95e-4aaf-b354-32ef133a5207 does not exist
Dec  3 01:43:03 compute-0 python3.9[361630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:43:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:43:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:43:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:11 compute-0 python3.9[384593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:11 compute-0 rsyslogd[188612]: imjournal: 2988 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 01:46:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:11 compute-0 python3.9[384669]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:13 compute-0 python3.9[384819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:14 compute-0 python3.9[384899]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json _original_basename=ceilometer_agent_ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:15 compute-0 python3.9[385049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:17 compute-0 python3.9[385125]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:18 compute-0 python3.9[385277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:18 compute-0 python3.9[385353]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:19 compute-0 python3.9[385504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:20 compute-0 podman[385554]: 2025-12-03 01:46:20.36087578 +0000 UTC m=+0.122712432 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:46:20 compute-0 python3.9[385597]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json _original_basename=kepler.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:21 compute-0 podman[385728]: 2025-12-03 01:46:21.556454513 +0000 UTC m=+0.105302456 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:46:21 compute-0 podman[385729]: 2025-12-03 01:46:21.602435925 +0000 UTC m=+0.144223376 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:46:21 compute-0 python3.9[385787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:22 compute-0 python3.9[385866]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:23 compute-0 python3.9[386018]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:24 compute-0 python3.9[386170]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:25 compute-0 python3.9[386322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:46:27 compute-0 python3.9[386474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:28 compute-0 python3.9[386552]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:46:28
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'images']
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:28 compute-0 python3.9[386728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:46:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7001d72-4bab-46e5-822a-8b7ef92081f5 does not exist
Dec  3 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6faa3b9d-a720-47f3-88ae-b338e4e56fa2 does not exist
Dec  3 01:46:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b03ba713-46ed-42df-aca2-1f36ef899721 does not exist
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:46:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:46:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:29 compute-0 podman[158098]: time="2025-12-03T01:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:46:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:46:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec  3 01:46:30 compute-0 python3.9[386937]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.392167072 +0000 UTC m=+0.071776869 container create 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.361137677 +0000 UTC m=+0.040747454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:30 compute-0 systemd[1]: Started libpod-conmon-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope.
Dec  3 01:46:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.539227879 +0000 UTC m=+0.218837666 container init 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.558404986 +0000 UTC m=+0.238014783 container start 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.565473047 +0000 UTC m=+0.245082834 container attach 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:46:30 compute-0 flamboyant_heyrovsky[387040]: 167 167
Dec  3 01:46:30 compute-0 systemd[1]: libpod-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope: Deactivated successfully.
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.571934052 +0000 UTC m=+0.251543839 container died 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea0f5c0d52f8b167ce254ce3711824991be8b47b7c668006cbf4fbc1268cfe9-merged.mount: Deactivated successfully.
Dec  3 01:46:30 compute-0 podman[387001]: 2025-12-03 01:46:30.662517766 +0000 UTC m=+0.342127533 container remove 7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:46:30 compute-0 systemd[1]: libpod-conmon-7d2a8cca1fe0c50a4c6f479e7286c8c4d370c23d9c9da06632e8f60e7e19de3e.scope: Deactivated successfully.
Dec  3 01:46:30 compute-0 podman[387135]: 2025-12-03 01:46:30.969754363 +0000 UTC m=+0.109170906 container create 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:30.915609338 +0000 UTC m=+0.055025951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:31 compute-0 systemd[1]: Started libpod-conmon-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope.
Dec  3 01:46:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.152333733 +0000 UTC m=+0.291750376 container init 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.165972402 +0000 UTC m=+0.305388945 container start 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:31 compute-0 podman[387135]: 2025-12-03 01:46:31.171001295 +0000 UTC m=+0.310417848 container attach 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:46:31 compute-0 podman[387173]: 2025-12-03 01:46:31.238178182 +0000 UTC m=+0.198527656 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 01:46:31 compute-0 python3.9[387194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:46:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:46:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:32 compute-0 python3.9[387288]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/kepler/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/kepler/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 01:46:32 compute-0 upbeat_poincare[387187]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:46:32 compute-0 upbeat_poincare[387187]: --> relative data size: 1.0
Dec  3 01:46:32 compute-0 upbeat_poincare[387187]: --> All data devices are unavailable
Dec  3 01:46:32 compute-0 systemd[1]: libpod-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Deactivated successfully.
Dec  3 01:46:32 compute-0 systemd[1]: libpod-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Consumed 1.287s CPU time.
Dec  3 01:46:32 compute-0 podman[387350]: 2025-12-03 01:46:32.585783152 +0000 UTC m=+0.049863033 container died 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-84070fb2b07c55b51b55554adca6f838b5842f439d849852d628d273caedc0ec-merged.mount: Deactivated successfully.
Dec  3 01:46:32 compute-0 podman[387365]: 2025-12-03 01:46:32.672676652 +0000 UTC m=+0.095151906 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:46:32 compute-0 podman[387350]: 2025-12-03 01:46:32.685058365 +0000 UTC m=+0.149138186 container remove 1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:46:32 compute-0 systemd[1]: libpod-conmon-1bb7395a03c8189a80e2207a1091f140cdd291d8ae889ecbba4cb4d3743941ea.scope: Deactivated successfully.
Dec  3 01:46:32 compute-0 podman[387351]: 2025-12-03 01:46:32.703066439 +0000 UTC m=+0.128051435 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 01:46:32 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:46:32 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.service: Failed with result 'exit-code'.
Dec  3 01:46:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:33 compute-0 python3.9[387611]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  3 01:46:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:33 compute-0 podman[387675]: 2025-12-03 01:46:33.86736076 +0000 UTC m=+0.117291728 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal)
Dec  3 01:46:33 compute-0 podman[387683]: 2025-12-03 01:46:33.883663165 +0000 UTC m=+0.091184143 container create dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:46:33 compute-0 podman[387683]: 2025-12-03 01:46:33.844973771 +0000 UTC m=+0.052494829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:33 compute-0 systemd[1]: Started libpod-conmon-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope.
Dec  3 01:46:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.010068262 +0000 UTC m=+0.217589320 container init dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.029799955 +0000 UTC m=+0.237320953 container start dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.038353699 +0000 UTC m=+0.245874747 container attach dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:46:34 compute-0 wonderful_haslett[387751]: 167 167
Dec  3 01:46:34 compute-0 systemd[1]: libpod-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope: Deactivated successfully.
Dec  3 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.041593191 +0000 UTC m=+0.249114179 container died dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5676bcf9d04427aa69109cac317c3eecc5f2bd1d35f1798fd0b921df1dcbff1f-merged.mount: Deactivated successfully.
Dec  3 01:46:34 compute-0 podman[387683]: 2025-12-03 01:46:34.1277547 +0000 UTC m=+0.335275698 container remove dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:46:34 compute-0 systemd[1]: libpod-conmon-dd954e1959f47969cab4d33ac83ece90781ab223c04f98591578aab1aaaf29f2.scope: Deactivated successfully.
Dec  3 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.418497856 +0000 UTC m=+0.088205918 container create be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.38501197 +0000 UTC m=+0.054720072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:34 compute-0 systemd[1]: Started libpod-conmon-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope.
Dec  3 01:46:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.613495629 +0000 UTC m=+0.283203731 container init be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.644072632 +0000 UTC m=+0.313780684 container start be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:46:34 compute-0 podman[387786]: 2025-12-03 01:46:34.651767382 +0000 UTC m=+0.321475474 container attach be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:46:35 compute-0 python3.9[387881]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:46:35 compute-0 focused_kilby[387824]: {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    "0": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "devices": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "/dev/loop3"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            ],
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_name": "ceph_lv0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_size": "21470642176",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "name": "ceph_lv0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "tags": {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_name": "ceph",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.crush_device_class": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.encrypted": "0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_id": "0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.vdo": "0"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            },
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "vg_name": "ceph_vg0"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        }
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    ],
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    "1": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "devices": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "/dev/loop4"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            ],
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_name": "ceph_lv1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_size": "21470642176",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "name": "ceph_lv1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "tags": {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_name": "ceph",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.crush_device_class": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.encrypted": "0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_id": "1",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.vdo": "0"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            },
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "vg_name": "ceph_vg1"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        }
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    ],
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    "2": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "devices": [
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "/dev/loop5"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            ],
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_name": "ceph_lv2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_size": "21470642176",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "name": "ceph_lv2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "tags": {
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.cluster_name": "ceph",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.crush_device_class": "",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.encrypted": "0",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osd_id": "2",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:                "ceph.vdo": "0"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            },
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "type": "block",
Dec  3 01:46:35 compute-0 focused_kilby[387824]:            "vg_name": "ceph_vg2"
Dec  3 01:46:35 compute-0 focused_kilby[387824]:        }
Dec  3 01:46:35 compute-0 focused_kilby[387824]:    ]
Dec  3 01:46:35 compute-0 focused_kilby[387824]: }
Dec  3 01:46:35 compute-0 systemd[1]: libpod-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope: Deactivated successfully.
Dec  3 01:46:35 compute-0 podman[387786]: 2025-12-03 01:46:35.512405347 +0000 UTC m=+1.182113409 container died be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75fee44198d0c6e848943c833a17d375186e3d7df6bed78233693f3ba191cbb-merged.mount: Deactivated successfully.
Dec  3 01:46:35 compute-0 podman[387786]: 2025-12-03 01:46:35.622348694 +0000 UTC m=+1.292056726 container remove be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kilby, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:46:35 compute-0 systemd[1]: libpod-conmon-be5afb9c57d166bbbd8170986db49aadfbca7da9499098ca08009e010a304ab6.scope: Deactivated successfully.
Dec  3 01:46:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:36 compute-0 podman[388118]: 2025-12-03 01:46:36.449484695 +0000 UTC m=+0.131823712 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:46:36 compute-0 python3[388170]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:46:36 compute-0 podman[388222]: 2025-12-03 01:46:36.913945248 +0000 UTC m=+0.087848248 container create aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:36 compute-0 podman[388222]: 2025-12-03 01:46:36.886942447 +0000 UTC m=+0.060845457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:36 compute-0 systemd[1]: Started libpod-conmon-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope.
Dec  3 01:46:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.072125381 +0000 UTC m=+0.246028441 container init aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:46:37 compute-0 python3[388170]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc",#012          "Digest": "sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-12-01T06:21:56.309143559Z",#012          "Config": {#012               "User": "ceilometer",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251125",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          
},#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 506187128,#012          "VirtualSize": 506187128,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012                    "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012                    "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012                    "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",#012                    "sha256:fac9f22f4739f84f681c87b7458e8da1dae9a71bb9d7e632a7076d50c98f8070"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251125",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": 
"CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "ceilometer",#012          "History": [#012               {#012                    "created": "2025-11-25T04:02:36.223494528Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:36.223562059Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:39.054452717Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025707917Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025744608Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-12-01T06:09:28.025767729Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025791379Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.02581523Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025867611Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.469442331Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:10:02.029095017Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012    
Dec  3 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.084823434 +0000 UTC m=+0.258726404 container start aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.090066023 +0000 UTC m=+0.263969003 container attach aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:46:37 compute-0 wizardly_carson[388258]: 167 167
Dec  3 01:46:37 compute-0 systemd[1]: libpod-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope: Deactivated successfully.
Dec  3 01:46:37 compute-0 conmon[388258]: conmon aa31783a576c1c3e7cd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope/container/memory.events
Dec  3 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.10186674 +0000 UTC m=+0.275769740 container died aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b214b83176b03f109152f28d42a492ff550fd3df4774690adc21e8d1bec6b528-merged.mount: Deactivated successfully.
Dec  3 01:46:37 compute-0 podman[388222]: 2025-12-03 01:46:37.174795831 +0000 UTC m=+0.348698791 container remove aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:37 compute-0 systemd[1]: libpod-conmon-aa31783a576c1c3e7cd197713f1a5c77ee8105439a571628ab617335402074e5.scope: Deactivated successfully.
Dec  3 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.418064332 +0000 UTC m=+0.069889005 container create 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.382720593 +0000 UTC m=+0.034545306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:46:37 compute-0 systemd[1]: Started libpod-conmon-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope.
Dec  3 01:46:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.586794876 +0000 UTC m=+0.238619589 container init 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.615328371 +0000 UTC m=+0.267153004 container start 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:46:37 compute-0 podman[388326]: 2025-12-03 01:46:37.619482679 +0000 UTC m=+0.271307422 container attach 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 01:46:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:46:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:46:38 compute-0 python3.9[388473]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:46:38 compute-0 crazy_elion[388364]: {
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_id": 2,
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "type": "bluestore"
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    },
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_id": 1,
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "type": "bluestore"
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    },
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_id": 0,
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:46:38 compute-0 crazy_elion[388364]:        "type": "bluestore"
Dec  3 01:46:38 compute-0 crazy_elion[388364]:    }
Dec  3 01:46:38 compute-0 crazy_elion[388364]: }
Dec  3 01:46:38 compute-0 systemd[1]: libpod-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Deactivated successfully.
Dec  3 01:46:38 compute-0 systemd[1]: libpod-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Consumed 1.254s CPU time.
Dec  3 01:46:38 compute-0 podman[388580]: 2025-12-03 01:46:38.948239792 +0000 UTC m=+0.052868230 container died 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9468ff4cb5a1e2a5cdbcefdf25700e7c8dd0f2e2444ae0acf63b8d25d78edaa2-merged.mount: Deactivated successfully.
Dec  3 01:46:39 compute-0 podman[388580]: 2025-12-03 01:46:39.082811932 +0000 UTC m=+0.187440350 container remove 15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:46:39 compute-0 systemd[1]: libpod-conmon-15b8ee9325ee0f7081ee5a96a3bc8c8e4d8ff68dbf0d727dab44d0776e631793.scope: Deactivated successfully.
Dec  3 01:46:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:46:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:46:39 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:39 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d7d63e2a-4d69-4afc-855c-4e90914f17c6 does not exist
Dec  3 01:46:39 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d41b1b59-cbeb-4946-bb46-88a1774e101e does not exist
Dec  3 01:46:39 compute-0 podman[388615]: 2025-12-03 01:46:39.175608419 +0000 UTC m=+0.120522240 container health_status 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, version=9.4, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Dec  3 01:46:39 compute-0 python3.9[388714]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:46:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:41 compute-0 python3.9[388889]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726399.5975723-391-198645089079497/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:43 compute-0 python3.9[388965]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:46:45 compute-0 python3.9[389119]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.601 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.603 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 01:46:45 compute-0 nova_compute[351485]: 2025-12-03 01:46:45.626 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:46:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:46 compute-0 python3.9[389271]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.645 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.646 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.679 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.680 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.680 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.681 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:46:47 compute-0 nova_compute[351485]: 2025-12-03 01:46:47.681 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:46:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:47 compute-0 python3[389423]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 01:46:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:46:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354050632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.165 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7",#012          "Digest": "sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086",#012          "RepoTags": [#012               "quay.io/sustainable_computing_io/kepler:release-0.7.12"#012          ],#012          "RepoDigests": [#012               "quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd",#012               "quay.io/sustainable_computing_io/kepler@sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2024-10-15T06:30:56.315982344Z",#012          "Config": {#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "container=oci",#012                    "NVIDIA_VISIBLE_DEVICES=all",#012                    "NVIDIA_DRIVER_CAPABILITIES=utility",#012                    "NVIDIA_MIG_MONITOR_DEVICES=all",#012                    "NVIDIA_MIG_CONFIG_DEVICES=all"#012               ],#012               "Entrypoint": [#012                    "/usr/bin/kepler"#012               ],#012               "Labels": {#012                    "architecture": "x86_64",#012                    "build-date": "2024-09-18T21:23:30",#012                    "com.redhat.component": "ubi9-container",#012                    "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012                    "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "distribution-scope": "public",#012                    "io.buildah.version": "1.29.0",#012                    "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "io.k8s.display-name": "Red Hat Universal Base Image 9",#012                    "io.openshift.expose-services": "",#012                    "io.openshift.tags": "base rhel9",#012                    "maintainer": "Red Hat, Inc.",#012                    "name": "ubi9",#012                    "release": "1214.1726694543",#012                    "release-0.7.12": "",#012                    "summary": "Provides the latest release of Red Hat Universal Base Image 9.",#012                    "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",#012                    "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",#012                    "vcs-type": "git",#012                    "vendor": "Red Hat, Inc.",#012                    "version": "9.4"#012               }#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 331545571,#012          "VirtualSize": 331545571,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/de1557109facda5eb038045e25371b06ad2baf5cf32c60a7fe84a603bee1e079/diff:/var/lib/containers/storage/overlay/725f7e4e3b8edde36f0bdcd313bbaf872dbe55b162264f8008ee3c09a0b89b66/diff:/var/lib/containers/storage/overlay/573769ea2305456dffa2f0674424aa020c1494387d36bcccb339788fd220d39b/diff:/var/lib/containers/storage/overlay/56a7d751d1997fb4e9fb31bd07356a0c9a7699a9bb524feeb3c7fe2b433b8223/diff:/var/lib/containers/storage/overlay/0560e6233aa93f1e1ac7bed53255811f32dc680869ef7f31dd630efc1203b853/diff:/var/lib/containers/storage/overlay/8d984035cdde48f32944ddaa464ac42d376faabc98415168800b2b8c9aec0930/diff:/var/lib/containers/storage/overlay/e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75",#012                    "sha256:f947b23b2d0723eac9b608b79e6d48e59d90f74958e05f2762295489e0088e86",#012                    "sha256:3bf6ab40cc16a103a087232c2c6a1a093dcb6141e70397de57907f5d00741429",#012                    "sha256:2f5269f1ade14b3b0806305a0b2d3efffe65a187b302789a50ac00bcb815b960",#012                    "sha256:413f5abb84bd1c03bdfd9c1e0dec8f4be92159c9c6116c4e44247efcdcc6b518",#012                    "sha256:60c06a2423851502fc43aec0680b91181b0d62b52812c019d3fc66f1546c4529",#012                    "sha256:323ce4bcad35618db6032dd5bfbd6c8ebb0cde882f730b19296d0ceaf5e39427",#012                    "sha256:270b3386a8e4a2127a32b007abfea7cb394ae1dee577ee7fefdbb79cd2bea856"#012               ]#012          },#012          "Labels": {#012               "architecture": "x86_64",#012               "build-date": "2024-09-18T21:23:30",#012               "com.redhat.component": "ubi9-container",#012               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012               "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "distribution-scope": "public",#012               "io.buildah.version": "1.29.0",#012               "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "io.k8s.display-name": "Red Hat Universal Base Image 9",#012               "io.openshift.expose-services": "",#012               "io.openshift.tags": "base rhel9",#012               "maintainer": "Red Hat, Inc.",#012               "name": "ubi9",#012               "release": "1214.1726694543",#012               "release-0.7.12": "",#012               "summary": "Provides the latest release of Red Hat Universal Base Image 9.",#012               "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",#012               "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",#012               "vcs-type": "git",#012               "vendor": "Red Hat, Inc.",#012               "version": "9.4"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.oci.image.manifest.v1+json",#012          "User": "",#012          "History": [#012               {#012                    "created": "2024-09-18T21:36:31.099323493Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:0067eb9f2ee25ab2d666a7639a85fe707b582902a09242761abf30c53664069b in / ",#012                    "empty_layer": true#012               },#012               {#012
Dec  3 01:46:48 compute-0 kepler[177915]: I1203 01:46:48.424412       1 exporter.go:218] Received shutdown signal
Dec  3 01:46:48 compute-0 kepler[177915]: I1203 01:46:48.424801       1 exporter.go:226] Exiting...
Dec  3 01:46:48 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Deactivated successfully.
Dec  3 01:46:48 compute-0 systemd[1]: libpod-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.scope: Consumed 41.410s CPU time.
Dec  3 01:46:48 compute-0 podman[389495]: 2025-12-03 01:46:48.625761741 +0000 UTC m=+0.287122813 container died 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Dec  3 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: Deactivated successfully.
Dec  3 01:46:48 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687.
Dec  3 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: No such file or directory
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.647 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.651 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.652 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.653 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-userdata-shm.mount: Deactivated successfully.
Dec  3 01:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bb532fcb66b2740ea57176a30adf601274f50f260afcb2d3f32777dc3ac537-merged.mount: Deactivated successfully.
Dec  3 01:46:48 compute-0 podman[389495]: 2025-12-03 01:46:48.685910057 +0000 UTC m=+0.347271149 container cleanup 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-type=git)
Dec  3 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop kepler
Dec  3 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.timer: No such file or directory
Dec  3 01:46:48 compute-0 systemd[1]: 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: Failed to open /run/systemd/transient/96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687-691e7a48be3cc627.service: No such file or directory
Dec  3 01:46:48 compute-0 podman[389520]: 2025-12-03 01:46:48.774017961 +0000 UTC m=+0.067501427 container remove 96795046c9b6f850fef55ca39ee0923eb70f56410d43c560ae83f5651f00d687 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Dec  3 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force kepler
Dec  3 01:46:48 compute-0 podman[389523]: Error: no container with name or ID "kepler" found: no such container
Dec  3 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Dec  3 01:46:48 compute-0 podman[389544]: Error: no container with name or ID "kepler" found: no such container
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.841 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.842 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Dec  3 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Failed with result 'exit-code'.
Dec  3 01:46:48 compute-0 podman[389543]: 2025-12-03 01:46:48.864942686 +0000 UTC m=+0.067371744 container create c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your 
containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 01:46:48 compute-0 podman[389543]: 2025-12-03 01:46:48.829306709 +0000 UTC m=+0.031735857 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  3 01:46:48 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  3 01:46:48 compute-0 nova_compute[351485]: 2025-12-03 01:46:48.941 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 01:46:48 compute-0 systemd[1]: edpm_kepler.service: Scheduled restart job, restart counter is at 1.
Dec  3 01:46:48 compute-0 systemd[1]: Stopped kepler container.
Dec  3 01:46:48 compute-0 systemd[1]: Starting kepler container...
Dec  3 01:46:48 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec  3 01:46:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:49 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec  3 01:46:49 compute-0 podman[389567]: 2025-12-03 01:46:49.080161916 +0000 UTC m=+0.183017152 container init c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9)
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.092 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.092 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.112 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 01:46:49 compute-0 kepler[389583]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 01:46:49 compute-0 podman[389567]: 2025-12-03 01:46:49.120879388 +0000 UTC m=+0.223734654 container start c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be 
the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9)
Dec  3 01:46:49 compute-0 podman[389580]: kepler
Dec  3 01:46:49 compute-0 python3[389423]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start kepler
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138398       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138829       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.138999       1 config.go:295] kernel version: 5.14
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.140593       1 power.go:78] Unable to obtain power, use estimate method
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.140646       1 redfish.go:169] failed to get redfish credential file path
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.141438       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.141483       1 power.go:79] using none to obtain power
Dec  3 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.141517       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  3 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.141609       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  3 01:46:49 compute-0 kepler[389583]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.145904       1 exporter.go:84] Number of CPUs: 8
Dec  3 01:46:49 compute-0 systemd[1]: Started kepler container.
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.146 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.171 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:46:49 compute-0 podman[389602]: 2025-12-03 01:46:49.253312358 +0000 UTC m=+0.111583376 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible)
Dec  3 01:46:49 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:46:49 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.service: Failed with result 'exit-code'.
Dec  3 01:46:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:46:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561605973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.661 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.671 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.693 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.696 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:46:49 compute-0 nova_compute[351485]: 2025-12-03 01:46:49.696 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:46:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.806735       1 watcher.go:83] Using in cluster k8s config
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.806801       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  3 01:46:49 compute-0 kepler[389583]: E1203 01:46:49.806913       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.813794       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.813868       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.820995       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.821058       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841789       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841856       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.841880       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857078       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857150       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857160       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857168       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857179       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857201       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857395       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857441       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857476       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857502       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.857793       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  3 01:46:49 compute-0 kepler[389583]: I1203 01:46:49.858684       1 exporter.go:208] Started Kepler in 720.841417ms
Dec  3 01:46:50 compute-0 python3.9[389832]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.628 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.629 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.630 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.808 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.809 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.809 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:50 compute-0 nova_compute[351485]: 2025-12-03 01:46:50.810 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:50 compute-0 podman[389859]: 2025-12-03 01:46:50.867425523 +0000 UTC m=+0.120641433 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:46:51 compute-0 nova_compute[351485]: 2025-12-03 01:46:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 01:46:51 compute-0 python3.9[390009]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:51 compute-0 podman[390010]: 2025-12-03 01:46:51.898342819 +0000 UTC m=+0.142202938 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:46:51 compute-0 podman[390011]: 2025-12-03 01:46:51.920142851 +0000 UTC m=+0.154241572 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  3 01:46:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:52 compute-0 python3.9[390197]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764726411.8950243-449-264263953144331/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:46:53 compute-0 python3.9[390273]: ansible-systemd Invoked with state=started name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 01:46:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:55 compute-0 python3.9[390427]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:46:55 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  3 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.298 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  3 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.401 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  3 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.401 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  3 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.402 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  3 01:46:55 compute-0 ceilometer_agent_ipmi[177659]: 2025-12-03 01:46:55.413 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  3 01:46:55 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  3 01:46:55 compute-0 systemd[1]: libpod-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Consumed 3.715s CPU time.
Dec  3 01:46:55 compute-0 podman[390431]: 2025-12-03 01:46:55.585027881 +0000 UTC m=+0.362922876 container died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:46:55 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-64bdef25bfa2e2e5.timer: Deactivated successfully.
Dec  3 01:46:55 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec  3 01:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-userdata-shm.mount: Deactivated successfully.
Dec  3 01:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889-merged.mount: Deactivated successfully.
Dec  3 01:46:55 compute-0 podman[390431]: 2025-12-03 01:46:55.655582464 +0000 UTC m=+0.433477419 container cleanup ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 01:46:55 compute-0 podman[390431]: ceilometer_agent_ipmi
Dec  3 01:46:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:55 compute-0 podman[390459]: ceilometer_agent_ipmi
Dec  3 01:46:55 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  3 01:46:55 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  3 01:46:55 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  3 01:46:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6ab5567927337a784d4e1fad456ca1db68e67b38a0f6ac3c208559879cc889/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  3 01:46:56 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.
Dec  3 01:46:56 compute-0 podman[390471]: 2025-12-03 01:46:56.09913381 +0000 UTC m=+0.272771524 container init ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + sudo -E kolla_set_configs
Dec  3 01:46:56 compute-0 podman[390471]: 2025-12-03 01:46:56.146280076 +0000 UTC m=+0.319917810 container start ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:46:56 compute-0 podman[390471]: ceilometer_agent_ipmi
Dec  3 01:46:56 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Validating config file
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying service configuration files
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: INFO:__main__:Writing out command to execute
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: ++ cat /run_command
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + ARGS=
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + sudo kolla_copy_cacerts
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + [[ ! -n '' ]]
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + . kolla_extend_start
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + umask 0022
Dec  3 01:46:56 compute-0 ceilometer_agent_ipmi[390486]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  3 01:46:56 compute-0 podman[390493]: 2025-12-03 01:46:56.278473397 +0000 UTC m=+0.113255482 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:46:56 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:46:56 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Failed with result 'exit-code'.
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.154 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.155 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.156 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.157 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.158 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.159 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.160 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.161 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.162 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.163 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.164 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.165 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.166 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.167 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.168 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.169 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.190 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.193 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.194 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.220 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpp0in8a1p/privsep.sock']
Dec  3 01:46:57 compute-0 python3.9[390675]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 01:46:57 compute-0 systemd[1]: Stopping kepler container...
Dec  3 01:46:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:46:57 compute-0 kepler[389583]: I1203 01:46:57.849904       1 exporter.go:218] Received shutdown signal
Dec  3 01:46:57 compute-0 kepler[389583]: I1203 01:46:57.851757       1 exporter.go:226] Exiting...
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.902 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.903 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp0in8a1p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.789 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.796 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.800 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  3 01:46:57 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:57.801 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.045 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.045 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.047 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.048 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.049 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.050 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.050 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.056 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.057 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.058 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.059 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.060 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.061 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 systemd[1]: libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.062 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 systemd[1]: libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Consumed 1.060s CPU time.
Dec  3 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.064079635 +0000 UTC m=+0.295050599 container stop c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 conmon[389583]: conmon c095e31a3195bbbfc2a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope/container/memory.events
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.065 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.066 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.067 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.067 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.068 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.068 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.069 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.070 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.071215339 +0000 UTC m=+0.302186223 container died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.071 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.072 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.072 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.073 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.075 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.078 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.079 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.080 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.081 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.082 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-2387456e96475ea9.timer: Deactivated successfully.
Dec  3 01:46:58 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.083 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.084 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.085 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.085 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.088 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.095 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.096 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  3 01:46:58 compute-0 ceilometer_agent_ipmi[390486]: 2025-12-03 01:46:58.099 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  3 01:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-userdata-shm.mount: Deactivated successfully.
Dec  3 01:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fee0233e554adad84cb1c80469097f5dd66633b17399db4710d1dfcd084939-merged.mount: Deactivated successfully.
Dec  3 01:46:58 compute-0 podman[390680]: 2025-12-03 01:46:58.120993679 +0000 UTC m=+0.351964563 container cleanup c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 01:46:58 compute-0 podman[390680]: kepler
Dec  3 01:46:58 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec  3 01:46:58 compute-0 podman[390712]: kepler
Dec  3 01:46:58 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  3 01:46:58 compute-0 systemd[1]: Stopped kepler container.
Dec  3 01:46:58 compute-0 systemd[1]: Starting kepler container...
Dec  3 01:46:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:46:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.
Dec  3 01:46:58 compute-0 podman[390725]: 2025-12-03 01:46:58.347302057 +0000 UTC m=+0.137193476 container init c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=)
Dec  3 01:46:58 compute-0 kepler[390740]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 01:46:58 compute-0 podman[390725]: 2025-12-03 01:46:58.394598766 +0000 UTC m=+0.184490155 container start c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, architecture=x86_64, io.buildah.version=1.29.0)
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397210       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397432       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.397457       1 config.go:295] kernel version: 5.14
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.398303       1 power.go:78] Unable to obtain power, use estimate method
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.398346       1 redfish.go:169] failed to get redfish credential file path
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.399007       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.399044       1 power.go:79] using none to obtain power
Dec  3 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.399068       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  3 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.399100       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  3 01:46:58 compute-0 podman[390725]: kepler
Dec  3 01:46:58 compute-0 kepler[390740]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.402079       1 exporter.go:84] Number of CPUs: 8
Dec  3 01:46:58 compute-0 systemd[1]: Started kepler container.
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:46:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:46:58 compute-0 podman[390750]: 2025-12-03 01:46:58.48480635 +0000 UTC m=+0.083135893 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, version=9.4, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red 
Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  3 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-dbf87a852cde7e.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:46:58 compute-0 systemd[1]: c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6-dbf87a852cde7e.service: Failed with result 'exit-code'.
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.925361       1 watcher.go:83] Using in cluster k8s config
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.925394       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  3 01:46:58 compute-0 kepler[390740]: E1203 01:46:58.925437       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.932567       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.932593       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.938896       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.939181       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951500       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951637       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.951664       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971481       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971605       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971615       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971625       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971636       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971659       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971808       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.971895       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972178       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972321       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.972472       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  3 01:46:58 compute-0 kepler[390740]: I1203 01:46:58.974131       1 exporter.go:208] Started Kepler in 577.281941ms
Dec  3 01:46:59 compute-0 python3.9[390933]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.607 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:46:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:46:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:46:59 compute-0 podman[158098]: time="2025-12-03T01:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:46:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:46:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Dec  3 01:46:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8108 "" "Go-http-client/1.1"
Dec  3 01:47:01 compute-0 python3.9[391087]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:47:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:47:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:01 compute-0 podman[391169]: 2025-12-03 01:47:01.951725232 +0000 UTC m=+0.201384937 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:47:02 compute-0 python3.9[391274]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:02 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  3 01:47:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:02 compute-0 podman[391275]: 2025-12-03 01:47:02.808618862 +0000 UTC m=+0.167322065 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 01:47:02 compute-0 podman[391275]: 2025-12-03 01:47:02.846009829 +0000 UTC m=+0.204712982 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 01:47:02 compute-0 podman[391287]: 2025-12-03 01:47:02.896187111 +0000 UTC m=+0.148473248 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:47:02 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  3 01:47:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:03 compute-0 python3.9[391475]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:04 compute-0 systemd[1]: Started libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope.
Dec  3 01:47:04 compute-0 podman[391476]: 2025-12-03 01:47:04.096645683 +0000 UTC m=+0.131601056 container exec 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  3 01:47:04 compute-0 podman[391476]: 2025-12-03 01:47:04.108341556 +0000 UTC m=+0.143296999 container exec_died 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:04 compute-0 systemd[1]: libpod-conmon-926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f.scope: Deactivated successfully.
Dec  3 01:47:04 compute-0 podman[391491]: 2025-12-03 01:47:04.252826409 +0000 UTC m=+0.142378083 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  3 01:47:05 compute-0 python3.9[391677]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:06 compute-0 python3.9[391829]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  3 01:47:06 compute-0 podman[391839]: 2025-12-03 01:47:06.873858616 +0000 UTC m=+0.117385651 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:47:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:08 compute-0 python3.9[392016]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:08 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  3 01:47:08 compute-0 podman[392017]: 2025-12-03 01:47:08.592096701 +0000 UTC m=+0.163290130 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:47:08 compute-0 podman[392017]: 2025-12-03 01:47:08.628117959 +0000 UTC m=+0.199311328 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:47:08 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  3 01:47:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:10 compute-0 python3.9[392200]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:11 compute-0 systemd[1]: Started libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope.
Dec  3 01:47:11 compute-0 podman[392201]: 2025-12-03 01:47:11.090722244 +0000 UTC m=+0.174369636 container exec 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:47:11 compute-0 podman[392201]: 2025-12-03 01:47:11.126259838 +0000 UTC m=+0.209907150 container exec_died 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:47:11 compute-0 systemd[1]: libpod-conmon-7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264.scope: Deactivated successfully.
Dec  3 01:47:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:12 compute-0 python3.9[392382]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:13 compute-0 python3.9[392534]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  3 01:47:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:14 compute-0 python3.9[392697]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:14 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec  3 01:47:15 compute-0 podman[392698]: 2025-12-03 01:47:15.008251822 +0000 UTC m=+0.158613127 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:47:15 compute-0 podman[392698]: 2025-12-03 01:47:15.042646404 +0000 UTC m=+0.193007729 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:47:15 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec  3 01:47:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:16 compute-0 python3.9[392878]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:16 compute-0 systemd[1]: Started libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope.
Dec  3 01:47:16 compute-0 podman[392879]: 2025-12-03 01:47:16.409021191 +0000 UTC m=+0.161402947 container exec 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:47:16 compute-0 podman[392879]: 2025-12-03 01:47:16.445039188 +0000 UTC m=+0.197420884 container exec_died 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:47:16 compute-0 systemd[1]: libpod-conmon-9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df.scope: Deactivated successfully.
Dec  3 01:47:17 compute-0 python3.9[393061]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:18 compute-0 python3.9[393215]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.500 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.501 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.501 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.502 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.515 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:47:19.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:47:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:20 compute-0 python3.9[393380]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:20 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec  3 01:47:20 compute-0 podman[393381]: 2025-12-03 01:47:20.238635441 +0000 UTC m=+0.129742463 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:47:20 compute-0 podman[393381]: 2025-12-03 01:47:20.274216406 +0000 UTC m=+0.165323368 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:47:20 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec  3 01:47:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:21 compute-0 podman[393532]: 2025-12-03 01:47:21.882926136 +0000 UTC m=+0.137335388 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:47:22 compute-0 podman[393583]: 2025-12-03 01:47:22.191847611 +0000 UTC m=+0.118188564 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:22 compute-0 podman[393582]: 2025-12-03 01:47:22.203186474 +0000 UTC m=+0.133936872 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec  3 01:47:22 compute-0 python3.9[393584]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:22 compute-0 systemd[1]: Started libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope.
Dec  3 01:47:22 compute-0 podman[393620]: 2025-12-03 01:47:22.456677127 +0000 UTC m=+0.148533719 container exec 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:47:22 compute-0 podman[393620]: 2025-12-03 01:47:22.492268313 +0000 UTC m=+0.184124925 container exec_died 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:47:22 compute-0 systemd[1]: libpod-conmon-82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195.scope: Deactivated successfully.
Dec  3 01:47:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:24 compute-0 python3.9[393800]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:26 compute-0 python3.9[393952]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  3 01:47:26 compute-0 podman[394047]: 2025-12-03 01:47:26.871929437 +0000 UTC m=+0.116097244 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:47:26 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 01:47:26 compute-0 systemd[1]: ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92-462cb216658e0ca0.service: Failed with result 'exit-code'.
Dec  3 01:47:27 compute-0 python3.9[394136]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:27 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec  3 01:47:27 compute-0 podman[394137]: 2025-12-03 01:47:27.504230409 +0000 UTC m=+0.156894758 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec  3 01:47:27 compute-0 podman[394137]: 2025-12-03 01:47:27.539761072 +0000 UTC m=+0.192425441 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec  3 01:47:27 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec  3 01:47:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:47:28
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:28 compute-0 python3.9[394319]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:47:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:47:28 compute-0 systemd[1]: Started libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope.
Dec  3 01:47:28 compute-0 podman[394320]: 2025-12-03 01:47:28.878019317 +0000 UTC m=+0.169717063 container exec 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:47:28 compute-0 podman[394326]: 2025-12-03 01:47:28.90649781 +0000 UTC m=+0.162409735 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed 
and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, name=ubi9)
Dec  3 01:47:28 compute-0 podman[394320]: 2025-12-03 01:47:28.911852693 +0000 UTC m=+0.203550359 container exec_died 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public)
Dec  3 01:47:28 compute-0 systemd[1]: libpod-conmon-945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b.scope: Deactivated successfully.
Dec  3 01:47:29 compute-0 podman[158098]: time="2025-12-03T01:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:47:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  3 01:47:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8098 "" "Go-http-client/1.1"
Dec  3 01:47:30 compute-0 python3.9[394520]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:31 compute-0 python3.9[394674]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:47:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:47:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:32 compute-0 podman[394811]: 2025-12-03 01:47:32.53423227 +0000 UTC m=+0.216772776 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 01:47:32 compute-0 python3.9[394857]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:32 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec  3 01:47:32 compute-0 podman[394864]: 2025-12-03 01:47:32.85704635 +0000 UTC m=+0.163317091 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:32 compute-0 podman[394864]: 2025-12-03 01:47:32.893967324 +0000 UTC m=+0.200238035 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:32 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  3 01:47:33 compute-0 podman[394892]: 2025-12-03 01:47:33.14828132 +0000 UTC m=+0.135528808 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec  3 01:47:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:34 compute-0 python3.9[395062]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:34 compute-0 systemd[1]: Started libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope.
Dec  3 01:47:34 compute-0 podman[395063]: 2025-12-03 01:47:34.296315727 +0000 UTC m=+0.146544842 container exec ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:47:34 compute-0 podman[395063]: 2025-12-03 01:47:34.33182582 +0000 UTC m=+0.182054925 container exec_died ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:47:34 compute-0 systemd[1]: libpod-conmon-ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92.scope: Deactivated successfully.
Dec  3 01:47:34 compute-0 podman[395093]: 2025-12-03 01:47:34.602780302 +0000 UTC m=+0.154391707 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec  3 01:47:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:36 compute-0 python3.9[395265]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:37 compute-0 podman[395391]: 2025-12-03 01:47:37.325852609 +0000 UTC m=+0.165813672 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:47:37 compute-0 python3.9[395441]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  3 01:47:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:47:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:47:39 compute-0 python3.9[395607]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:39 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec  3 01:47:39 compute-0 podman[395608]: 2025-12-03 01:47:39.568390895 +0000 UTC m=+0.196344743 container exec c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64)
Dec  3 01:47:39 compute-0 podman[395608]: 2025-12-03 01:47:39.605318049 +0000 UTC m=+0.233271897 container exec_died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, container_name=kepler)
Dec  3 01:47:39 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec  3 01:47:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:47:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4595 writes, 20K keys, 4595 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4595 writes, 4595 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1286 writes, 5584 keys, 1286 commit groups, 1.0 writes per commit group, ingest: 8.45 MB, 0.01 MB/s
Interval WAL: 1286 writes, 1286 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.9      0.24              0.11        11    0.022       0      0       0.0       0.0
  L6      1/0    6.77 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    124.4    102.1      0.68              0.33        10    0.068     42K   5258       0.0       0.0
 Sum      1/0    6.77 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     92.1     99.5      0.92              0.43        21    0.044     42K   5258       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     96.7     96.8      0.37              0.15         8    0.046     18K   2057       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    124.4    102.1      0.68              0.33        10    0.068     42K   5258       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.7      0.24              0.11        10    0.024       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.021, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.9 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 6.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(418,6.00 MB,1.94772%) FilterBlock(22,127.17 KB,0.0403218%) IndexBlock(22,238.08 KB,0.0754864%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 118534c7-b330-4167-b02a-b4580bf46380 does not exist
Dec  3 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b92bc31-cff7-475d-9bde-6eb5517db1be does not exist
Dec  3 01:47:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 93ec1a13-eccc-4be5-9055-228be06da025 does not exist
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:47:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:47:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:47:40 compute-0 python3.9[395918]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:40 compute-0 systemd[1]: Started libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope.
Dec  3 01:47:41 compute-0 podman[395944]: 2025-12-03 01:47:41.009615877 +0000 UTC m=+0.153881572 container exec c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm)
Dec  3 01:47:41 compute-0 podman[395944]: 2025-12-03 01:47:41.062168257 +0000 UTC m=+0.206433942 container exec_died c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, config_id=edpm, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:47:41 compute-0 systemd[1]: libpod-conmon-c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6.scope: Deactivated successfully.
Dec  3 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.758040322 +0000 UTC m=+0.081000882 container create 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:47:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:41 compute-0 systemd[1]: Started libpod-conmon-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope.
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.727578003 +0000 UTC m=+0.050538563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.866903188 +0000 UTC m=+0.189863818 container init 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.878108588 +0000 UTC m=+0.201069148 container start 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.882983967 +0000 UTC m=+0.205944527 container attach 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:47:41 compute-0 clever_mcclintock[396228]: 167 167
Dec  3 01:47:41 compute-0 systemd[1]: libpod-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope: Deactivated successfully.
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.887472745 +0000 UTC m=+0.210433305 container died 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a6016610766394af1f833362e11c9f76359db8ff31b84a0429bfb6b23703c20-merged.mount: Deactivated successfully.
Dec  3 01:47:41 compute-0 podman[396188]: 2025-12-03 01:47:41.94334931 +0000 UTC m=+0.266309870 container remove 25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:47:41 compute-0 systemd[1]: libpod-conmon-25d69ccdda8ba2d755c976af4f3ff8b6ac66274da66198559e7ecfc63f085ec0.scope: Deactivated successfully.
Dec  3 01:47:42 compute-0 python3.9[396270]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.239596012 +0000 UTC m=+0.116408742 container create ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.204301515 +0000 UTC m=+0.081114315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:42 compute-0 systemd[1]: Started libpod-conmon-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope.
Dec  3 01:47:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.412940589 +0000 UTC m=+0.289753339 container init ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.435663297 +0000 UTC m=+0.312476027 container start ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:47:42 compute-0 podman[396281]: 2025-12-03 01:47:42.441672108 +0000 UTC m=+0.318484928 container attach ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:47:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:43 compute-0 python3.9[396456]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  3 01:47:43 compute-0 sad_allen[396321]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:47:43 compute-0 sad_allen[396321]: --> relative data size: 1.0
Dec  3 01:47:43 compute-0 sad_allen[396321]: --> All data devices are unavailable
Dec  3 01:47:43 compute-0 systemd[1]: libpod-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Deactivated successfully.
Dec  3 01:47:43 compute-0 systemd[1]: libpod-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Consumed 1.248s CPU time.
Dec  3 01:47:43 compute-0 podman[396281]: 2025-12-03 01:47:43.768115495 +0000 UTC m=+1.644928245 container died ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:47:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b14389536b199d6c630ecd520700bce4d33867ca18de69b9520ac4c3484db4-merged.mount: Deactivated successfully.
Dec  3 01:47:43 compute-0 podman[396281]: 2025-12-03 01:47:43.86781618 +0000 UTC m=+1.744628900 container remove ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_allen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:47:43 compute-0 systemd[1]: libpod-conmon-ca9ab3b0bb3ca0e0b689ea2c937ee6b2319290b290a4ae454a8de15aaddf3e70.scope: Deactivated successfully.
Dec  3 01:47:44 compute-0 python3.9[396734]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:44 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec  3 01:47:44 compute-0 podman[396755]: 2025-12-03 01:47:44.681880058 +0000 UTC m=+0.130796163 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 01:47:44 compute-0 podman[396755]: 2025-12-03 01:47:44.716258779 +0000 UTC m=+0.165174854 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:44 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec  3 01:47:44 compute-0 podman[396831]: 2025-12-03 01:47:44.936342438 +0000 UTC m=+0.074881717 container create aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:47:44 compute-0 podman[396831]: 2025-12-03 01:47:44.90311591 +0000 UTC m=+0.041655169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:45 compute-0 systemd[1]: Started libpod-conmon-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope.
Dec  3 01:47:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.089448957 +0000 UTC m=+0.227988316 container init aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.10147663 +0000 UTC m=+0.240015909 container start aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:47:45 compute-0 competent_kepler[396859]: 167 167
Dec  3 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.108655885 +0000 UTC m=+0.247195164 container attach aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:47:45 compute-0 systemd[1]: libpod-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope: Deactivated successfully.
Dec  3 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.110098046 +0000 UTC m=+0.248637325 container died aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c02aa9353c722f044076831909d5a85bc9a8b3d80f4a190fbb2fc13c1f9684-merged.mount: Deactivated successfully.
Dec  3 01:47:45 compute-0 podman[396831]: 2025-12-03 01:47:45.185024624 +0000 UTC m=+0.323563883 container remove aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:47:45 compute-0 systemd[1]: libpod-conmon-aac42895e19d60845870e8292de4e28a30cb9b166b2c700be9756ee3528a7dcd.scope: Deactivated successfully.
Dec  3 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.434235545 +0000 UTC m=+0.080446737 container create 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.407200743 +0000 UTC m=+0.053411945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:45 compute-0 systemd[1]: Started libpod-conmon-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope.
Dec  3 01:47:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.611519153 +0000 UTC m=+0.257730345 container init 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.623460104 +0000 UTC m=+0.269671296 container start 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:47:45 compute-0 podman[396952]: 2025-12-03 01:47:45.629598159 +0000 UTC m=+0.275809331 container attach 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:47:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:45 compute-0 python3.9[397030]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:46 compute-0 systemd[1]: Started libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope.
Dec  3 01:47:46 compute-0 podman[397031]: 2025-12-03 01:47:46.152069687 +0000 UTC m=+0.157218327 container exec 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:47:46 compute-0 podman[397031]: 2025-12-03 01:47:46.188668971 +0000 UTC m=+0.193817631 container exec_died 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 01:47:46 compute-0 systemd[1]: libpod-conmon-5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6.scope: Deactivated successfully.
Dec  3 01:47:46 compute-0 reverent_cohen[396994]: {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    "0": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "devices": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "/dev/loop3"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            ],
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_name": "ceph_lv0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_size": "21470642176",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "name": "ceph_lv0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "tags": {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_name": "ceph",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.crush_device_class": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.encrypted": "0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_id": "0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.vdo": "0"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            },
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "vg_name": "ceph_vg0"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        }
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    ],
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    "1": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "devices": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "/dev/loop4"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            ],
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_name": "ceph_lv1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_size": "21470642176",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "name": "ceph_lv1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "tags": {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_name": "ceph",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.crush_device_class": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.encrypted": "0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_id": "1",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.vdo": "0"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            },
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "vg_name": "ceph_vg1"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        }
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    ],
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    "2": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "devices": [
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "/dev/loop5"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            ],
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_name": "ceph_lv2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_size": "21470642176",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "name": "ceph_lv2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "tags": {
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.cluster_name": "ceph",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.crush_device_class": "",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.encrypted": "0",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osd_id": "2",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:                "ceph.vdo": "0"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            },
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "type": "block",
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:            "vg_name": "ceph_vg2"
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:        }
Dec  3 01:47:46 compute-0 reverent_cohen[396994]:    ]
Dec  3 01:47:46 compute-0 reverent_cohen[396994]: }
Dec  3 01:47:46 compute-0 systemd[1]: libpod-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope: Deactivated successfully.
Dec  3 01:47:46 compute-0 podman[396952]: 2025-12-03 01:47:46.490885374 +0000 UTC m=+1.137096596 container died 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea6b26204defa9a5c288b3a95749d83e614f9f4ea19d62f7086567021b94398-merged.mount: Deactivated successfully.
Dec  3 01:47:46 compute-0 podman[396952]: 2025-12-03 01:47:46.59938571 +0000 UTC m=+1.245596882 container remove 6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:47:46 compute-0 systemd[1]: libpod-conmon-6b96fb705c206d395977ce0b9d0b2744e5ebaeb9fd919fd9808f3191f85d664e.scope: Deactivated successfully.
Dec  3 01:47:47 compute-0 python3.9[397330]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.808839079 +0000 UTC m=+0.096601278 container create 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:47:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.780752877 +0000 UTC m=+0.068515076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:47 compute-0 systemd[1]: Started libpod-conmon-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope.
Dec  3 01:47:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.937763067 +0000 UTC m=+0.225525266 container init 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.954473824 +0000 UTC m=+0.242236023 container start 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:47:47 compute-0 compassionate_beaver[397455]: 167 167
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.961991759 +0000 UTC m=+0.249753958 container attach 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:47:47 compute-0 systemd[1]: libpod-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope: Deactivated successfully.
Dec  3 01:47:47 compute-0 podman[397396]: 2025-12-03 01:47:47.964361446 +0000 UTC m=+0.252123645 container died 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-840a963c9e0b0dee208bd8f8b3da28f576cd6a2e82eb31492f92f00e36b0b72a-merged.mount: Deactivated successfully.
Dec  3 01:47:48 compute-0 podman[397396]: 2025-12-03 01:47:48.047126558 +0000 UTC m=+0.334888727 container remove 706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:47:48 compute-0 systemd[1]: libpod-conmon-706836c2e0a5133edf145af45ff3d5bab0e54824bc96f1b1787696f7836a4603.scope: Deactivated successfully.
Dec  3 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.30863848 +0000 UTC m=+0.084563884 container create 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.27568882 +0000 UTC m=+0.051614294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:47:48 compute-0 systemd[1]: Started libpod-conmon-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope.
Dec  3 01:47:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.452722691 +0000 UTC m=+0.228648125 container init 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.467282316 +0000 UTC m=+0.243207700 container start 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:47:48 compute-0 podman[397528]: 2025-12-03 01:47:48.476967593 +0000 UTC m=+0.252893027 container attach 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.597 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:48 compute-0 python3.9[397580]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.631 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:47:48 compute-0 nova_compute[351485]: 2025-12-03 01:47:48.632 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118109666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.192 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]: {
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_id": 2,
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "type": "bluestore"
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    },
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_id": 1,
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "type": "bluestore"
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    },
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_id": 0,
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:        "type": "bluestore"
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]:    }
Dec  3 01:47:49 compute-0 charming_chaplygin[397576]: }
Dec  3 01:47:49 compute-0 systemd[1]: libpod-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Deactivated successfully.
Dec  3 01:47:49 compute-0 systemd[1]: libpod-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Consumed 1.091s CPU time.
Dec  3 01:47:49 compute-0 podman[397769]: 2025-12-03 01:47:49.646451612 +0000 UTC m=+0.047721943 container died 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.654 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.656 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4449MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.656 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.657 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-64281b75abe53d5773e45ce3e9eabfd1df3259d26515dc97a5cd6e5b9d4c734f-merged.mount: Deactivated successfully.
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.737 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.738 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:47:49 compute-0 podman[397769]: 2025-12-03 01:47:49.745128407 +0000 UTC m=+0.146398748 container remove 9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:47:49 compute-0 systemd[1]: libpod-conmon-9140ce1efe0611952b0033419a64df7f6e0146afdbd5643e465a35fd8b648422.scope: Deactivated successfully.
Dec  3 01:47:49 compute-0 nova_compute[351485]: 2025-12-03 01:47:49.767 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:47:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:47:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9ea9fa61-9bf2-4b3a-8930-7d1d102fe5c3 does not exist
Dec  3 01:47:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b0a9ad1b-42f0-4567-a08c-3a9b033683bc does not exist
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.832685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469832720, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1426, "num_deletes": 251, "total_data_size": 2286148, "memory_usage": 2336784, "flush_reason": "Manual Compaction"}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469853907, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2243299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19461, "largest_seqno": 20886, "table_properties": {"data_size": 2236567, "index_size": 3867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13734, "raw_average_key_size": 19, "raw_value_size": 2223184, "raw_average_value_size": 3198, "num_data_blocks": 177, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726317, "oldest_key_time": 1764726317, "file_creation_time": 1764726469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 21320 microseconds, and 7500 cpu microseconds.
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.853987) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2243299 bytes OK
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.854028) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855934) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855952) EVENT_LOG_v1 {"time_micros": 1764726469855946, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.855973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2279851, prev total WAL file size 2279851, number of live WAL files 2.
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.857207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2190KB)], [47(6931KB)]
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469857258, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9341294, "oldest_snapshot_seqno": -1}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4274 keys, 7564443 bytes, temperature: kUnknown
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469914951, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7564443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7534820, "index_size": 17865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105687, "raw_average_key_size": 24, "raw_value_size": 7456290, "raw_average_value_size": 1744, "num_data_blocks": 751, "num_entries": 4274, "num_filter_entries": 4274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.915132) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7564443 bytes
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.917043) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 131.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.4) OK, records in: 4788, records dropped: 514 output_compression: NoCompression
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.917062) EVENT_LOG_v1 {"time_micros": 1764726469917053, "job": 24, "event": "compaction_finished", "compaction_time_micros": 57739, "compaction_time_cpu_micros": 31962, "output_level": 6, "num_output_files": 1, "total_output_size": 7564443, "num_input_records": 4788, "num_output_records": 4274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469917615, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726469919044, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.857045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:49 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:47:49.919271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:47:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:47:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672027519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.275 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.287 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.307 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.308 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:47:50 compute-0 nova_compute[351485]: 2025-12-03 01:47:50.308 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:47:50 compute-0 python3.9[397883]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:47:50 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec  3 01:47:50 compute-0 podman[397884]: 2025-12-03 01:47:50.895208072 +0000 UTC m=+0.165938226 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:47:50 compute-0 podman[397884]: 2025-12-03 01:47:50.930282833 +0000 UTC m=+0.201012987 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd)
Dec  3 01:47:51 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.289 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.289 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.290 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.311 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.312 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:51 compute-0 nova_compute[351485]: 2025-12-03 01:47:51.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:52 compute-0 nova_compute[351485]: 2025-12-03 01:47:52.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:47:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:52 compute-0 podman[398019]: 2025-12-03 01:47:52.868748524 +0000 UTC m=+0.108654301 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm)
Dec  3 01:47:52 compute-0 podman[398015]: 2025-12-03 01:47:52.873435348 +0000 UTC m=+0.113077618 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:47:52 compute-0 podman[398021]: 2025-12-03 01:47:52.902731593 +0000 UTC m=+0.140365256 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:47:53 compute-0 python3.9[398123]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  3 01:47:53 compute-0 systemd[1]: Started libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope.
Dec  3 01:47:53 compute-0 podman[398124]: 2025-12-03 01:47:53.325234819 +0000 UTC m=+0.153883772 container exec df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:47:53 compute-0 podman[398124]: 2025-12-03 01:47:53.360719781 +0000 UTC m=+0.189368784 container exec_died df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec  3 01:47:53 compute-0 systemd[1]: libpod-conmon-df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630.scope: Deactivated successfully.
Dec  3 01:47:53 compute-0 nova_compute[351485]: 2025-12-03 01:47:53.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:47:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:54 compute-0 python3.9[398306]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:55 compute-0 python3.9[398458]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:56 compute-0 python3.9[398610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:47:57 compute-0 podman[398660]: 2025-12-03 01:47:57.265169617 +0000 UTC m=+0.088299990 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:47:57 compute-0 python3.9[398708]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:47:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:47:58 compute-0 python3.9[398860]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.609 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.611 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:47:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:47:59.611 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:47:59 compute-0 podman[158098]: time="2025-12-03T01:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:47:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:47:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8109 "" "Go-http-client/1.1"
Dec  3 01:47:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:47:59 compute-0 podman[398972]: 2025-12-03 01:47:59.89308591 +0000 UTC m=+0.137481034 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  3 01:48:00 compute-0 python3.9[399032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:00 compute-0 python3.9[399110]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:48:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:48:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:02 compute-0 python3.9[399262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:02 compute-0 python3.9[399340]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2qdhz08x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:02 compute-0 podman[399341]: 2025-12-03 01:48:02.93721613 +0000 UTC m=+0.189572040 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:48:03 compute-0 podman[399489]: 2025-12-03 01:48:03.704617636 +0000 UTC m=+0.158130853 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 01:48:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:03 compute-0 python3.9[399534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:04 compute-0 python3.9[399613]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:04 compute-0 podman[399619]: 2025-12-03 01:48:04.860806535 +0000 UTC m=+0.121793865 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 01:48:05 compute-0 python3.9[399785]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:48:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:07 compute-0 python3[399938]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 01:48:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:07 compute-0 podman[400038]: 2025-12-03 01:48:07.862242275 +0000 UTC m=+0.108469546 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:48:08 compute-0 python3.9[400112]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:08 compute-0 python3.9[400190]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:10 compute-0 python3.9[400342]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:11 compute-0 python3.9[400420]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:12 compute-0 python3.9[400572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:12 compute-0 python3.9[400650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:14 compute-0 python3.9[400802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:14 compute-0 python3.9[400880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:15 compute-0 python3.9[401032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:16 compute-0 python3.9[401110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:17 compute-0 python3.9[401262]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:48:19 compute-0 python3.9[401417]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:20 compute-0 python3.9[401570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:48:21 compute-0 python3.9[401723]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 01:48:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:22 compute-0 python3.9[401877]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:23 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec  3 01:48:23 compute-0 systemd[1]: session-58.scope: Consumed 2min 9.646s CPU time.
Dec  3 01:48:23 compute-0 systemd-logind[800]: Session 58 logged out. Waiting for processes to exit.
Dec  3 01:48:23 compute-0 systemd-logind[800]: Removed session 58.
Dec  3 01:48:23 compute-0 podman[401904]: 2025-12-03 01:48:23.427294762 +0000 UTC m=+0.110578876 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:48:23 compute-0 podman[401902]: 2025-12-03 01:48:23.426478529 +0000 UTC m=+0.120746777 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:48:23 compute-0 podman[401903]: 2025-12-03 01:48:23.495125877 +0000 UTC m=+0.181146219 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 01:48:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:27 compute-0 podman[401960]: 2025-12-03 01:48:27.921446924 +0000 UTC m=+0.163162236 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:48:28
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images']
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:28 compute-0 systemd-logind[800]: New session 59 of user zuul.
Dec  3 01:48:28 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:48:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:48:29 compute-0 ceph-mgr[193109]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1922561230
Dec  3 01:48:29 compute-0 podman[158098]: time="2025-12-03T01:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:48:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:48:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8107 "" "Go-http-client/1.1"
Dec  3 01:48:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:30 compute-0 python3.9[402133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:48:30 compute-0 podman[402162]: 2025-12-03 01:48:30.887496614 +0000 UTC m=+0.135503417 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, version=9.4)
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:48:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:48:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:32 compute-0 python3.9[402311]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  3 01:48:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:33 compute-0 podman[402390]: 2025-12-03 01:48:33.892124556 +0000 UTC m=+0.135415005 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 01:48:33 compute-0 podman[402389]: 2025-12-03 01:48:33.923925153 +0000 UTC m=+0.174785618 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:48:34 compute-0 python3.9[402509]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 01:48:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:35 compute-0 podman[402541]: 2025-12-03 01:48:35.833302155 +0000 UTC m=+0.095371113 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 01:48:36 compute-0 python3.9[402614]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 01:48:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:48:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:48:38 compute-0 podman[402739]: 2025-12-03 01:48:38.610077524 +0000 UTC m=+0.126090909 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:48:38 compute-0 python3.9[402790]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:39 compute-0 python3.9[402870]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:40 compute-0 python3.9[403022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:41 compute-0 python3.9[403178]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 01:48:42 compute-0 python3.9[403256]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 01:48:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:43 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec  3 01:48:43 compute-0 systemd[1]: session-59.scope: Consumed 10.661s CPU time.
Dec  3 01:48:43 compute-0 systemd-logind[800]: Session 59 logged out. Waiting for processes to exit.
Dec  3 01:48:43 compute-0 systemd-logind[800]: Removed session 59.
Dec  3 01:48:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:49 compute-0 nova_compute[351485]: 2025-12-03 01:48:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.638 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.639 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.639 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:48:50 compute-0 nova_compute[351485]: 2025-12-03 01:48:50.640 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:48:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:48:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934064838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.137 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.640 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.749 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.750 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:48:51 compute-0 nova_compute[351485]: 2025-12-03 01:48:51.774 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:48:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9d9156a8-629b-499f-9f57-8c28607cbb17 does not exist
Dec  3 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6ff8367a-a5ce-4552-92d6-ce594838e560 does not exist
Dec  3 01:48:52 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b244ce6f-0e56-4aa2-97c6-b7db03292ac8 does not exist
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:48:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588088429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.327 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.339 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.357 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.360 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:48:52 compute-0 nova_compute[351485]: 2025-12-03 01:48:52.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:48:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:48:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.330042379 +0000 UTC m=+0.081329432 container create b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.362 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.28102291 +0000 UTC m=+0.032309953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:48:53 compute-0 systemd[1]: Started libpod-conmon-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope.
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.403 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.404 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.476822227 +0000 UTC m=+0.228109320 container init b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.494374578 +0000 UTC m=+0.245661641 container start b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:48:53 compute-0 funny_sanderson[403729]: 167 167
Dec  3 01:48:53 compute-0 systemd[1]: libpod-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope: Deactivated successfully.
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.503386485 +0000 UTC m=+0.254673578 container attach b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.513639447 +0000 UTC m=+0.264926510 container died b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-22fd9b579a33b26c7a38174fa02559e37ae4322fbfbefe549a0b5688da0e8df2-merged.mount: Deactivated successfully.
Dec  3 01:48:53 compute-0 nova_compute[351485]: 2025-12-03 01:48:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:53 compute-0 podman[403713]: 2025-12-03 01:48:53.595464122 +0000 UTC m=+0.346751155 container remove b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:48:53 compute-0 podman[403732]: 2025-12-03 01:48:53.6122156 +0000 UTC m=+0.133256853 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:48:53 compute-0 systemd[1]: libpod-conmon-b1535b6d473e74f6e97d1ccc4cf82a6ab8606da8db0b166b96d70f1e248c68e4.scope: Deactivated successfully.
Dec  3 01:48:53 compute-0 podman[403733]: 2025-12-03 01:48:53.657359418 +0000 UTC m=+0.172014449 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 01:48:53 compute-0 podman[403763]: 2025-12-03 01:48:53.690958777 +0000 UTC m=+0.109824725 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:48:53 compute-0 podman[403813]: 2025-12-03 01:48:53.821493701 +0000 UTC m=+0.081487676 container create 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:48:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:53 compute-0 podman[403813]: 2025-12-03 01:48:53.783081315 +0000 UTC m=+0.043075360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:48:53 compute-0 systemd[1]: Started libpod-conmon-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope.
Dec  3 01:48:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.016176656 +0000 UTC m=+0.276170651 container init 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.034140479 +0000 UTC m=+0.294134464 container start 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:48:54 compute-0 podman[403813]: 2025-12-03 01:48:54.041478468 +0000 UTC m=+0.301472503 container attach 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:48:54 compute-0 nova_compute[351485]: 2025-12-03 01:48:54.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:48:55 compute-0 angry_montalcini[403829]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:48:55 compute-0 angry_montalcini[403829]: --> relative data size: 1.0
Dec  3 01:48:55 compute-0 angry_montalcini[403829]: --> All data devices are unavailable
Dec  3 01:48:55 compute-0 systemd[1]: libpod-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Deactivated successfully.
Dec  3 01:48:55 compute-0 podman[403813]: 2025-12-03 01:48:55.278454262 +0000 UTC m=+1.538448237 container died 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:48:55 compute-0 systemd[1]: libpod-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Consumed 1.183s CPU time.
Dec  3 01:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-510bfb4b9ab5ae36965995224078c7f0c86d15f9eb2613996e5851ea3e4c9530-merged.mount: Deactivated successfully.
Dec  3 01:48:55 compute-0 podman[403813]: 2025-12-03 01:48:55.38913393 +0000 UTC m=+1.649127885 container remove 6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:48:55 compute-0 systemd[1]: libpod-conmon-6e3a80e314a27fbec153ac3311affd60046346cdeb724bfb95f54481bdfc19ef.scope: Deactivated successfully.
Dec  3 01:48:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.570857798 +0000 UTC m=+0.080086246 container create 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.538356161 +0000 UTC m=+0.047584639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:48:56 compute-0 systemd[1]: Started libpod-conmon-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope.
Dec  3 01:48:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.734962541 +0000 UTC m=+0.244191039 container init 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.752076049 +0000 UTC m=+0.261304457 container start 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.758636246 +0000 UTC m=+0.267864744 container attach 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:48:56 compute-0 nifty_golick[404029]: 167 167
Dec  3 01:48:56 compute-0 systemd[1]: libpod-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope: Deactivated successfully.
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.762486716 +0000 UTC m=+0.271715164 container died 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17bb8bcb11f988180464180060cfa5f3aaec78a28c94da768e5718cb6dbc06e-merged.mount: Deactivated successfully.
Dec  3 01:48:56 compute-0 podman[404012]: 2025-12-03 01:48:56.837678352 +0000 UTC m=+0.346906800 container remove 8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:48:56 compute-0 systemd[1]: libpod-conmon-8cd1e0b9897d8ca1ec91b822dcb5e6218a8913bfe9037fdeb110329b07df019f.scope: Deactivated successfully.
Dec  3 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.125212036 +0000 UTC m=+0.084412510 container create f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.097286139 +0000 UTC m=+0.056486613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:48:57 compute-0 systemd[1]: Started libpod-conmon-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope.
Dec  3 01:48:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.307989161 +0000 UTC m=+0.267189665 container init f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.33002158 +0000 UTC m=+0.289222054 container start f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:48:57 compute-0 podman[404053]: 2025-12-03 01:48:57.336892626 +0000 UTC m=+0.296093160 container attach f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:48:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:48:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:58 compute-0 elated_cannon[404068]: {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    "0": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "devices": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "/dev/loop3"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            ],
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_name": "ceph_lv0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_size": "21470642176",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "name": "ceph_lv0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "tags": {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_name": "ceph",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.crush_device_class": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.encrypted": "0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_id": "0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.vdo": "0"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            },
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "vg_name": "ceph_vg0"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        }
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    ],
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    "1": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "devices": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "/dev/loop4"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            ],
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_name": "ceph_lv1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_size": "21470642176",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "name": "ceph_lv1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "tags": {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_name": "ceph",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.crush_device_class": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.encrypted": "0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_id": "1",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.vdo": "0"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            },
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "vg_name": "ceph_vg1"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        }
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    ],
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    "2": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "devices": [
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "/dev/loop5"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            ],
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_name": "ceph_lv2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_size": "21470642176",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "name": "ceph_lv2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "tags": {
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.cluster_name": "ceph",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.crush_device_class": "",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.encrypted": "0",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osd_id": "2",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:                "ceph.vdo": "0"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            },
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "type": "block",
Dec  3 01:48:58 compute-0 elated_cannon[404068]:            "vg_name": "ceph_vg2"
Dec  3 01:48:58 compute-0 elated_cannon[404068]:        }
Dec  3 01:48:58 compute-0 elated_cannon[404068]:    ]
Dec  3 01:48:58 compute-0 elated_cannon[404068]: }
Dec  3 01:48:58 compute-0 systemd[1]: libpod-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope: Deactivated successfully.
Dec  3 01:48:58 compute-0 podman[404053]: 2025-12-03 01:48:58.259288085 +0000 UTC m=+1.218488559 container died f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 01:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0aa15f1c011e0dce0998c4c91d996043cb0f1eb3ae62e0c18e721a97ab539f-merged.mount: Deactivated successfully.
Dec  3 01:48:58 compute-0 podman[404053]: 2025-12-03 01:48:58.371186707 +0000 UTC m=+1.330387181 container remove f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 01:48:58 compute-0 systemd[1]: libpod-conmon-f83001378dada4658c18f2f0eef339e609bf8db584e300b2b3a4ed13d277278d.scope: Deactivated successfully.
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:48:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:48:58 compute-0 podman[404078]: 2025-12-03 01:48:58.452877127 +0000 UTC m=+0.137413150 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.60321984 +0000 UTC m=+0.096246007 container create fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.612 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.613 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:48:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:48:59.614 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.56642288 +0000 UTC m=+0.059449117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:48:59 compute-0 systemd[1]: Started libpod-conmon-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope.
Dec  3 01:48:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.735939397 +0000 UTC m=+0.228965614 container init fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:48:59 compute-0 podman[158098]: time="2025-12-03T01:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.760343834 +0000 UTC m=+0.253370011 container start fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.767071186 +0000 UTC m=+0.260097433 container attach fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:48:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43960 "" "Go-http-client/1.1"
Dec  3 01:48:59 compute-0 eloquent_moser[404261]: 167 167
Dec  3 01:48:59 compute-0 systemd[1]: libpod-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope: Deactivated successfully.
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.773487829 +0000 UTC m=+0.266514016 container died fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:48:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8115 "" "Go-http-client/1.1"
Dec  3 01:48:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cfaa1d5376d179e77f03961d16365a9f9a7c7ada4b0247d72e82bb3f8acc5b-merged.mount: Deactivated successfully.
Dec  3 01:48:59 compute-0 podman[404245]: 2025-12-03 01:48:59.865402541 +0000 UTC m=+0.358428688 container remove fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:48:59 compute-0 systemd[1]: libpod-conmon-fb83e37a445419218913b44b1447a80e25d752562b9a3026cd1447f3b7899518.scope: Deactivated successfully.
Dec  3 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.175375846 +0000 UTC m=+0.110714880 container create 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.133953104 +0000 UTC m=+0.069292198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:49:00 compute-0 systemd[1]: Started libpod-conmon-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope.
Dec  3 01:49:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.362238108 +0000 UTC m=+0.297577202 container init 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.382393723 +0000 UTC m=+0.317732737 container start 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:49:00 compute-0 podman[404287]: 2025-12-03 01:49:00.389797914 +0000 UTC m=+0.325137008 container attach 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:49:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:49:01 compute-0 confident_jennings[404303]: {
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_id": 2,
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "type": "bluestore"
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    },
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_id": 1,
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "type": "bluestore"
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    },
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_id": 0,
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:49:01 compute-0 confident_jennings[404303]:        "type": "bluestore"
Dec  3 01:49:01 compute-0 confident_jennings[404303]:    }
Dec  3 01:49:01 compute-0 confident_jennings[404303]: }
Dec  3 01:49:01 compute-0 systemd[1]: libpod-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Deactivated successfully.
Dec  3 01:49:01 compute-0 podman[404287]: 2025-12-03 01:49:01.593303224 +0000 UTC m=+1.528642268 container died 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:49:01 compute-0 systemd[1]: libpod-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Consumed 1.196s CPU time.
Dec  3 01:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d497c43e08b4fdb304340bd360ca0c8c8a5620b45ced7d1c3245207812a6af31-merged.mount: Deactivated successfully.
Dec  3 01:49:01 compute-0 podman[404287]: 2025-12-03 01:49:01.686921655 +0000 UTC m=+1.622260659 container remove 631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 01:49:01 compute-0 systemd[1]: libpod-conmon-631878f7f45f67fdb1baa19446c31e20f9c02ca402dd254edd5a201bde8d495c.scope: Deactivated successfully.
Dec  3 01:49:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:49:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:49:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:49:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:49:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5147d833-1c04-473d-ac4d-8b2b78090c3c does not exist
Dec  3 01:49:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cc2d6c82-0bc0-4148-a05a-50853d100791 does not exist
Dec  3 01:49:01 compute-0 podman[404337]: 2025-12-03 01:49:01.789258075 +0000 UTC m=+0.142078635 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec  3 01:49:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:49:02 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:49:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:04 compute-0 podman[404417]: 2025-12-03 01:49:04.86272937 +0000 UTC m=+0.109453384 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  3 01:49:04 compute-0 podman[404416]: 2025-12-03 01:49:04.936950948 +0000 UTC m=+0.188971483 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:49:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:06 compute-0 podman[404462]: 2025-12-03 01:49:06.90072357 +0000 UTC m=+0.151653708 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec  3 01:49:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:08 compute-0 podman[404482]: 2025-12-03 01:49:08.857673258 +0000 UTC m=+0.114533239 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:49:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1312798595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/927659794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:49:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:49:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:49:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2479349272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:49:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.501 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.502 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.527 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.528 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:49:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:49:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:23 compute-0 podman[404507]: 2025-12-03 01:49:23.859894143 +0000 UTC m=+0.108493120 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:49:23 compute-0 podman[404508]: 2025-12-03 01:49:23.882816967 +0000 UTC m=+0.133806251 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:49:23 compute-0 podman[404509]: 2025-12-03 01:49:23.914882418 +0000 UTC m=+0.150243913 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:49:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:49:28
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:49:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:49:28 compute-0 podman[404566]: 2025-12-03 01:49:28.872008453 +0000 UTC m=+0.127047831 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:49:29 compute-0 podman[158098]: time="2025-12-03T01:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:49:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:49:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
Dec  3 01:49:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:49:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:49:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:32 compute-0 podman[404585]: 2025-12-03 01:49:32.877504505 +0000 UTC m=+0.127675769 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, architecture=x86_64)
Dec  3 01:49:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:35 compute-0 podman[404606]: 2025-12-03 01:49:35.877714897 +0000 UTC m=+0.120360903 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 01:49:35 compute-0 podman[404605]: 2025-12-03 01:49:35.916082825 +0000 UTC m=+0.165816881 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 01:49:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:37 compute-0 podman[404648]: 2025-12-03 01:49:37.870136398 +0000 UTC m=+0.126893786 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:49:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:49:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:39 compute-0 podman[404668]: 2025-12-03 01:49:39.863356622 +0000 UTC m=+0.114565030 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:49:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:49:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5902 writes, 24K keys, 5902 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5902 writes, 991 syncs, 5.96 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:49:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:49:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7100 writes, 29K keys, 7100 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7100 writes, 1332 syncs, 5.33 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:49:49 compute-0 nova_compute[351485]: 2025-12-03 01:49:49.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:49 compute-0 nova_compute[351485]: 2025-12-03 01:49:49.591 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:50 compute-0 nova_compute[351485]: 2025-12-03 01:49:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.635 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.636 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:49:51 compute-0 nova_compute[351485]: 2025-12-03 01:49:51.637 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:49:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:49:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094770831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.125 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.748 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.750 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.750 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.751 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:49:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.981 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:49:52 compute-0 nova_compute[351485]: 2025-12-03 01:49:52.982 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.039 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:49:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:49:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699787615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.557 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.567 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.583 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.584 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:49:53 compute-0 nova_compute[351485]: 2025-12-03 01:49:53.585 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:49:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:54 compute-0 nova_compute[351485]: 2025-12-03 01:49:54.564 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:54 compute-0 podman[404739]: 2025-12-03 01:49:54.863804474 +0000 UTC m=+0.117597546 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:49:54 compute-0 podman[404741]: 2025-12-03 01:49:54.894493186 +0000 UTC m=+0.136210769 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:49:54 compute-0 podman[404740]: 2025-12-03 01:49:54.906417771 +0000 UTC m=+0.153102533 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 01:49:55 compute-0 nova_compute[351485]: 2025-12-03 01:49:55.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:49:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5889 writes, 24K keys, 5889 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5889 writes, 998 syncs, 5.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:49:56 compute-0 nova_compute[351485]: 2025-12-03 01:49:56.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:49:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 01:49:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  3 01:49:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036824462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  3 01:49:57 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14375 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 01:49:57 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 01:49:57 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 01:49:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:49:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:49:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.614 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:49:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:49:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:49:59 compute-0 podman[158098]: time="2025-12-03T01:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:49:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:49:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8115 "" "Go-http-client/1.1"
Dec  3 01:49:59 compute-0 podman[404800]: 2025-12-03 01:49:59.859873492 +0000 UTC m=+0.115523197 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:49:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:50:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:50:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 754c2b14-b6d4-4b61-90a8-e54fa26dddc5 does not exist
Dec  3 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9b5027fb-b072-4e92-94c1-d82206deaf46 does not exist
Dec  3 01:50:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c1a962a7-4108-48bc-9be6-7ff4b206005d does not exist
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:50:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:50:03 compute-0 podman[404974]: 2025-12-03 01:50:03.70433023 +0000 UTC m=+0.143738941 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9)
Dec  3 01:50:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.542840024 +0000 UTC m=+0.054490563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.708935001 +0000 UTC m=+0.220585470 container create 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:50:04 compute-0 systemd[1]: Started libpod-conmon-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope.
Dec  3 01:50:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.878366543 +0000 UTC m=+0.390017062 container init 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.896521373 +0000 UTC m=+0.408171842 container start 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.907364438 +0000 UTC m=+0.419014957 container attach 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 01:50:04 compute-0 exciting_khayyam[405124]: 167 167
Dec  3 01:50:04 compute-0 systemd[1]: libpod-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope: Deactivated successfully.
Dec  3 01:50:04 compute-0 podman[405109]: 2025-12-03 01:50:04.912504102 +0000 UTC m=+0.424154561 container died 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbb66b2985e21a9fd26b4cc2f821bfb19dbb838dda15e24af0e79430402c1be2-merged.mount: Deactivated successfully.
Dec  3 01:50:05 compute-0 podman[405109]: 2025-12-03 01:50:05.008985633 +0000 UTC m=+0.520636092 container remove 3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:50:05 compute-0 systemd[1]: libpod-conmon-3916938ac11cbfca8745182d4f2d91e1a87339e01e856f65c3336d896fca4a1b.scope: Deactivated successfully.
Dec  3 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.271869401 +0000 UTC m=+0.094917108 container create f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.234011407 +0000 UTC m=+0.057059164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:05 compute-0 systemd[1]: Started libpod-conmon-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope.
Dec  3 01:50:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.450689636 +0000 UTC m=+0.273737353 container init f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.466797509 +0000 UTC m=+0.289845196 container start f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:50:05 compute-0 podman[405146]: 2025-12-03 01:50:05.471854571 +0000 UTC m=+0.294902288 container attach f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:50:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:06 compute-0 frosty_hoover[405161]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:50:06 compute-0 frosty_hoover[405161]: --> relative data size: 1.0
Dec  3 01:50:06 compute-0 frosty_hoover[405161]: --> All data devices are unavailable
Dec  3 01:50:06 compute-0 systemd[1]: libpod-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Deactivated successfully.
Dec  3 01:50:06 compute-0 systemd[1]: libpod-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Consumed 1.294s CPU time.
Dec  3 01:50:06 compute-0 podman[405215]: 2025-12-03 01:50:06.899016157 +0000 UTC m=+0.047939788 container died f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:50:06 compute-0 podman[405189]: 2025-12-03 01:50:06.902496205 +0000 UTC m=+0.152768604 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed17c8e7890926a99cf1290fa5a2b725939024ecfebf6615feaf99d58d4bf37-merged.mount: Deactivated successfully.
Dec  3 01:50:06 compute-0 podman[405215]: 2025-12-03 01:50:06.985100876 +0000 UTC m=+0.134024457 container remove f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:50:06 compute-0 podman[405188]: 2025-12-03 01:50:06.989050887 +0000 UTC m=+0.240769127 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 01:50:07 compute-0 systemd[1]: libpod-conmon-f8dea570920ea13db44d0b4056f5c685de96deca3c3cfcf4f6e13bd475a10cf7.scope: Deactivated successfully.
Dec  3 01:50:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.133278503 +0000 UTC m=+0.071999145 container create de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.103787904 +0000 UTC m=+0.042508536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:08 compute-0 systemd[1]: Started libpod-conmon-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope.
Dec  3 01:50:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.272077893 +0000 UTC m=+0.210798585 container init de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.301693175 +0000 UTC m=+0.240413777 container start de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.307089727 +0000 UTC m=+0.245810399 container attach de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:50:08 compute-0 wizardly_swartz[405402]: 167 167
Dec  3 01:50:08 compute-0 systemd[1]: libpod-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope: Deactivated successfully.
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.314253618 +0000 UTC m=+0.252974240 container died de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:50:08 compute-0 podman[405399]: 2025-12-03 01:50:08.353910683 +0000 UTC m=+0.147216228 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, io.buildah.version=1.33.7)
Dec  3 01:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f9b2a8ca99656622e3d15e82e474df4e88606d7e02eb0d9d12712f945b1d9b-merged.mount: Deactivated successfully.
Dec  3 01:50:08 compute-0 podman[405385]: 2025-12-03 01:50:08.389580875 +0000 UTC m=+0.328301497 container remove de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:50:08 compute-0 systemd[1]: libpod-conmon-de677d3f30fec5e53dc4018e76214f1b013ea89307f40e306900a4ff43822edd.scope: Deactivated successfully.
Dec  3 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.690611805 +0000 UTC m=+0.097870381 container create 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.655149048 +0000 UTC m=+0.062407684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:08 compute-0 systemd[1]: Started libpod-conmon-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope.
Dec  3 01:50:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.859183002 +0000 UTC m=+0.266441578 container init 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.889887735 +0000 UTC m=+0.297146301 container start 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:50:08 compute-0 podman[405441]: 2025-12-03 01:50:08.895223375 +0000 UTC m=+0.302481991 container attach 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]: {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    "0": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "devices": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "/dev/loop3"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            ],
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_name": "ceph_lv0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_size": "21470642176",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "name": "ceph_lv0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "tags": {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_name": "ceph",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.crush_device_class": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.encrypted": "0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_id": "0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.vdo": "0"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            },
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "vg_name": "ceph_vg0"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        }
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    ],
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    "1": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "devices": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "/dev/loop4"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            ],
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_name": "ceph_lv1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_size": "21470642176",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "name": "ceph_lv1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "tags": {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_name": "ceph",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.crush_device_class": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.encrypted": "0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_id": "1",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.vdo": "0"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            },
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "vg_name": "ceph_vg1"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        }
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    ],
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    "2": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "devices": [
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "/dev/loop5"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            ],
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_name": "ceph_lv2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_size": "21470642176",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "name": "ceph_lv2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "tags": {
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.cluster_name": "ceph",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.crush_device_class": "",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.encrypted": "0",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osd_id": "2",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:                "ceph.vdo": "0"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            },
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "type": "block",
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:            "vg_name": "ceph_vg2"
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:        }
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]:    ]
Dec  3 01:50:09 compute-0 loving_varahamihira[405457]: }
Dec  3 01:50:09 compute-0 systemd[1]: libpod-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope: Deactivated successfully.
Dec  3 01:50:09 compute-0 podman[405441]: 2025-12-03 01:50:09.767980922 +0000 UTC m=+1.175239478 container died 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00cbb1626a779e3f691459818dc0903dc2867bb68117c8319e2582a0146b151-merged.mount: Deactivated successfully.
Dec  3 01:50:09 compute-0 podman[405441]: 2025-12-03 01:50:09.857771225 +0000 UTC m=+1.265029801 container remove 5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_varahamihira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:50:09 compute-0 systemd[1]: libpod-conmon-5663bef0cdadd48012e922031eae9c0a63c9dbb651cf1d2ab6de0ab2be13feb6.scope: Deactivated successfully.
Dec  3 01:50:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:10 compute-0 podman[405478]: 2025-12-03 01:50:10.002466381 +0000 UTC m=+0.090252697 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.836064626 +0000 UTC m=+0.083731574 container create 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.802682528 +0000 UTC m=+0.050349506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:10 compute-0 systemd[1]: Started libpod-conmon-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope.
Dec  3 01:50:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.97282658 +0000 UTC m=+0.220493578 container init 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:50:10 compute-0 podman[405642]: 2025-12-03 01:50:10.99275397 +0000 UTC m=+0.240420918 container start 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.000398795 +0000 UTC m=+0.248065773 container attach 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:50:11 compute-0 epic_mclaren[405658]: 167 167
Dec  3 01:50:11 compute-0 systemd[1]: libpod-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope: Deactivated successfully.
Dec  3 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.006365152 +0000 UTC m=+0.254032090 container died 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-175bb76ab1b7bd3194f448bc2722cd95cc4130cf95d73d5e9d7363f2c50fc834-merged.mount: Deactivated successfully.
Dec  3 01:50:11 compute-0 podman[405642]: 2025-12-03 01:50:11.062654714 +0000 UTC m=+0.310321632 container remove 5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:50:11 compute-0 systemd[1]: libpod-conmon-5434689d21554e9c6984523cd815bc3aa541703c5231c8ce0eea3f9a860db8b4.scope: Deactivated successfully.
Dec  3 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.343779944 +0000 UTC m=+0.084555197 container create 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.305015905 +0000 UTC m=+0.045791198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:50:11 compute-0 systemd[1]: Started libpod-conmon-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope.
Dec  3 01:50:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.491019262 +0000 UTC m=+0.231794475 container init 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.517045064 +0000 UTC m=+0.257820287 container start 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 01:50:11 compute-0 podman[405680]: 2025-12-03 01:50:11.523355181 +0000 UTC m=+0.264130394 container attach 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:50:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]: {
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_id": 2,
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "type": "bluestore"
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    },
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_id": 1,
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "type": "bluestore"
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    },
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_id": 0,
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:        "type": "bluestore"
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]:    }
Dec  3 01:50:12 compute-0 nervous_engelbart[405696]: }
Dec  3 01:50:12 compute-0 systemd[1]: libpod-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Deactivated successfully.
Dec  3 01:50:12 compute-0 podman[405680]: 2025-12-03 01:50:12.757120603 +0000 UTC m=+1.497895816 container died 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:50:12 compute-0 systemd[1]: libpod-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Consumed 1.238s CPU time.
Dec  3 01:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc07b9a25cea84465a4162539c1ca13befc7f766add458236d3b44894712da87-merged.mount: Deactivated successfully.
Dec  3 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:12 compute-0 podman[405680]: 2025-12-03 01:50:12.854987453 +0000 UTC m=+1.595762696 container remove 1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:50:12 compute-0 systemd[1]: libpod-conmon-1d471b02f9a7bf9a4eb1a2ae610b39f3d448375a6d03b878ef3954d710e38c82.scope: Deactivated successfully.
Dec  3 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:50:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:50:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 420ec8c0-112d-40ca-ac4f-268d8e658c85 does not exist
Dec  3 01:50:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b5250900-8546-40b5-901e-9aa64fdc34a2 does not exist
Dec  3 01:50:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:50:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  3 01:50:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3063780719' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  3 01:50:17 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 01:50:17 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 01:50:17 compute-0 ceph-mgr[193109]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 01:50:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:26 compute-0 podman[405792]: 2025-12-03 01:50:26.160792974 +0000 UTC m=+0.082330391 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 01:50:26 compute-0 podman[405794]: 2025-12-03 01:50:26.203155158 +0000 UTC m=+0.101730408 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:50:26 compute-0 podman[405793]: 2025-12-03 01:50:26.219810027 +0000 UTC m=+0.125325422 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:50:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:50:28
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'backups', 'cephfs.cephfs.data']
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:50:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:50:29 compute-0 podman[158098]: time="2025-12-03T01:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:50:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:50:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8111 "" "Go-http-client/1.1"
Dec  3 01:50:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:30 compute-0 podman[405852]: 2025-12-03 01:50:30.904251944 +0000 UTC m=+0.158053964 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:50:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:50:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:34 compute-0 podman[405871]: 2025-12-03 01:50:34.870423698 +0000 UTC m=+0.123476230 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4)
Dec  3 01:50:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:37 compute-0 podman[405891]: 2025-12-03 01:50:37.863307617 +0000 UTC m=+0.109050504 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:50:37 compute-0 podman[405890]: 2025-12-03 01:50:37.896756649 +0000 UTC m=+0.151473308 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:50:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:50:38 compute-0 podman[405938]: 2025-12-03 01:50:38.895806315 +0000 UTC m=+0.143446522 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, 
url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:50:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:40 compute-0 podman[405959]: 2025-12-03 01:50:40.877083396 +0000 UTC m=+0.128900613 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:50:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:50:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:50:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1476334581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:50:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:50:51 compute-0 nova_compute[351485]: 2025-12-03 01:50:51.631 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:50:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:50:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428749513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.127 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.643 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.644 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4585MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.645 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.645 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.780 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.781 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:50:52 compute-0 nova_compute[351485]: 2025-12-03 01:50:52.815 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:50:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:50:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983472691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.327 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.338 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.358 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.362 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:50:53 compute-0 nova_compute[351485]: 2025-12-03 01:50:53.363 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:50:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:54 compute-0 nova_compute[351485]: 2025-12-03 01:50:54.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:54 compute-0 nova_compute[351485]: 2025-12-03 01:50:54.344 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:55 compute-0 nova_compute[351485]: 2025-12-03 01:50:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:56 compute-0 podman[406030]: 2025-12-03 01:50:56.872462184 +0000 UTC m=+0.121114674 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  3 01:50:56 compute-0 podman[406031]: 2025-12-03 01:50:56.873511534 +0000 UTC m=+0.114592460 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 01:50:56 compute-0 podman[406032]: 2025-12-03 01:50:56.923589845 +0000 UTC m=+0.159183287 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:50:57 compute-0 nova_compute[351485]: 2025-12-03 01:50:57.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:50:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:50:58 compute-0 nova_compute[351485]: 2025-12-03 01:50:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:50:58 compute-0 nova_compute[351485]: 2025-12-03 01:50:58.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.615 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.616 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:50:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:50:59.616 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:50:59 compute-0 podman[158098]: time="2025-12-03T01:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:50:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:50:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
Dec  3 01:51:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:51:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:51:01 compute-0 podman[406088]: 2025-12-03 01:51:01.853190542 +0000 UTC m=+0.110007340 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:51:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:05 compute-0 systemd-logind[800]: New session 60 of user zuul.
Dec  3 01:51:05 compute-0 systemd[1]: Started Session 60 of User zuul.
Dec  3 01:51:05 compute-0 podman[406109]: 2025-12-03 01:51:05.283397633 +0000 UTC m=+0.127666028 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9)
Dec  3 01:51:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:06 compute-0 python3[406303]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:51:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:08 compute-0 podman[406471]: 2025-12-03 01:51:08.905511251 +0000 UTC m=+0.148037042 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:51:08 compute-0 podman[406467]: 2025-12-03 01:51:08.956472177 +0000 UTC m=+0.201966141 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:51:09 compute-0 podman[406553]: 2025-12-03 01:51:09.071431526 +0000 UTC m=+0.105364990 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 01:51:09 compute-0 python3[406601]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:51:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:10 compute-0 python3[406755]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:51:11 compute-0 podman[406759]: 2025-12-03 01:51:11.419015085 +0000 UTC m=+0.149412131 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 01:51:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:13 compute-0 python3[407013]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 01:51:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:51:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:51:14 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:15 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:15 compute-0 python3[407316]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 01:51:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.491273338 +0000 UTC m=+0.086000984 container create 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.45299552 +0000 UTC m=+0.047723186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:16 compute-0 systemd[1]: Started libpod-conmon-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope.
Dec  3 01:51:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.63828739 +0000 UTC m=+0.233015106 container init 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.658384457 +0000 UTC m=+0.253112103 container start 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.665226759 +0000 UTC m=+0.259954405 container attach 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:51:16 compute-0 nifty_hypatia[407573]: 167 167
Dec  3 01:51:16 compute-0 systemd[1]: libpod-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope: Deactivated successfully.
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.67165057 +0000 UTC m=+0.266378226 container died 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b47958293d763e373e5f3abba9f940c491c23416d758b6713d69329a1aa575-merged.mount: Deactivated successfully.
Dec  3 01:51:16 compute-0 podman[407541]: 2025-12-03 01:51:16.756019497 +0000 UTC m=+0.350747153 container remove 24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:51:16 compute-0 systemd[1]: libpod-conmon-24ae28b6e0f5a3d687916e76c1eb81e2cf584df5fc961b0175b662970c1970d3.scope: Deactivated successfully.
Dec  3 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.042887139 +0000 UTC m=+0.081668981 container create 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.011372112 +0000 UTC m=+0.050154024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:17 compute-0 systemd[1]: Started libpod-conmon-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope.
Dec  3 01:51:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.247949437 +0000 UTC m=+0.286731349 container init 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.267768055 +0000 UTC m=+0.306549917 container start 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:51:17 compute-0 podman[407617]: 2025-12-03 01:51:17.275617416 +0000 UTC m=+0.314399348 container attach 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:51:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:18 compute-0 python3[407773]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.502 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.503 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.503 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.506 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.511 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e675af60>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:51:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:51:19 compute-0 python3[409462]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]: [
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:    {
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "available": false,
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "ceph_device": false,
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "lsm_data": {},
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "lvs": [],
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "path": "/dev/sr0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "rejected_reasons": [
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "Has a FileSystem",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "Insufficient space (<5GB)"
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        ],
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        "sys_api": {
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "actuators": null,
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "device_nodes": "sr0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "devname": "sr0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "human_readable_size": "482.00 KB",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "id_bus": "ata",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "model": "QEMU DVD-ROM",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "nr_requests": "2",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "parent": "/dev/sr0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "partitions": {},
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "path": "/dev/sr0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "removable": "1",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "rev": "2.5+",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "ro": "0",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "rotational": "1",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "sas_address": "",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "sas_device_handle": "",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "scheduler_mode": "mq-deadline",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "sectors": 0,
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "sectorsize": "2048",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "size": 493568.0,
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "support_discard": "2048",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "type": "disk",
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:            "vendor": "QEMU"
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:        }
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]:    }
Dec  3 01:51:19 compute-0 xenodochial_napier[407636]: ]
Dec  3 01:51:19 compute-0 systemd[1]: libpod-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Deactivated successfully.
Dec  3 01:51:19 compute-0 podman[407617]: 2025-12-03 01:51:19.843214054 +0000 UTC m=+2.881995916 container died 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 01:51:19 compute-0 systemd[1]: libpod-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Consumed 2.684s CPU time.
Dec  3 01:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-071a75421b90bf0a32a020b424f0effab78eef0487e003a061979bdd84006258-merged.mount: Deactivated successfully.
Dec  3 01:51:19 compute-0 podman[407617]: 2025-12-03 01:51:19.933340173 +0000 UTC m=+2.972122015 container remove 54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:51:19 compute-0 systemd[1]: libpod-conmon-54e8456559c7cb65fb4261699b2d65a211c2f9ee7d0c1913edbe7cdd7ee6d482.scope: Deactivated successfully.
Dec  3 01:51:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4247a20b-32ca-402c-990e-01131ac5de11 does not exist
Dec  3 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 288c4e51-ad35-491b-90bd-b2456f5c38b2 does not exist
Dec  3 01:51:20 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 59993959-cf3b-4c42-ba46-461371feddb1 does not exist
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:51:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:51:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:51:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:21 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.263393516 +0000 UTC m=+0.081054425 container create acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.23230838 +0000 UTC m=+0.049969309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:21 compute-0 systemd[1]: Started libpod-conmon-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope.
Dec  3 01:51:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.429072454 +0000 UTC m=+0.246733373 container init acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.446606768 +0000 UTC m=+0.264267687 container start acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.454052468 +0000 UTC m=+0.271713387 container attach acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:51:21 compute-0 zealous_joliot[410403]: 167 167
Dec  3 01:51:21 compute-0 systemd[1]: libpod-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope: Deactivated successfully.
Dec  3 01:51:21 compute-0 conmon[410403]: conmon acc25027fcb4efc52d61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope/container/memory.events
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.466243931 +0000 UTC m=+0.283904850 container died acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7803730e138b5061f11ffea5e76b310e9463a9a89de75ad4755744715624562d-merged.mount: Deactivated successfully.
Dec  3 01:51:21 compute-0 podman[410387]: 2025-12-03 01:51:21.540292367 +0000 UTC m=+0.357953256 container remove acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:51:21 compute-0 systemd[1]: libpod-conmon-acc25027fcb4efc52d61dfb9ce579d21c97c3d3b3f004c5030b2c1b9949cc9de.scope: Deactivated successfully.
Dec  3 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.794100337 +0000 UTC m=+0.073146572 container create 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 01:51:21 compute-0 systemd[1]: Started libpod-conmon-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope.
Dec  3 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.770808221 +0000 UTC m=+0.049854496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.93689347 +0000 UTC m=+0.215939735 container init 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.95251154 +0000 UTC m=+0.231557755 container start 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:51:21 compute-0 podman[410426]: 2025-12-03 01:51:21.956661127 +0000 UTC m=+0.235707402 container attach 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:51:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Dec  3 01:51:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:23 compute-0 wizardly_williamson[410441]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:51:23 compute-0 wizardly_williamson[410441]: --> relative data size: 1.0
Dec  3 01:51:23 compute-0 wizardly_williamson[410441]: --> All data devices are unavailable
Dec  3 01:51:23 compute-0 systemd[1]: libpod-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Deactivated successfully.
Dec  3 01:51:23 compute-0 systemd[1]: libpod-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Consumed 1.211s CPU time.
Dec  3 01:51:23 compute-0 podman[410470]: 2025-12-03 01:51:23.32965556 +0000 UTC m=+0.065616280 container died 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f1913886ebb912fc032013f762220ffc23f1ddf0075cb523a73054e86ea5c7-merged.mount: Deactivated successfully.
Dec  3 01:51:23 compute-0 podman[410470]: 2025-12-03 01:51:23.406310649 +0000 UTC m=+0.142271339 container remove 30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:51:23 compute-0 systemd[1]: libpod-conmon-30b5e7d17f184e846187c5cfbc7ab3520722e378bbb04a3886f486bf2eba31ba.scope: Deactivated successfully.
Dec  3 01:51:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.608488919 +0000 UTC m=+0.079079219 container create 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.576519228 +0000 UTC m=+0.047109578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:24 compute-0 systemd[1]: Started libpod-conmon-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope.
Dec  3 01:51:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.751276882 +0000 UTC m=+0.221867162 container init 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.769114424 +0000 UTC m=+0.239704724 container start 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.775972008 +0000 UTC m=+0.246562288 container attach 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:51:24 compute-0 quirky_austin[410639]: 167 167
Dec  3 01:51:24 compute-0 systemd[1]: libpod-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope: Deactivated successfully.
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.781228656 +0000 UTC m=+0.251818956 container died 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b75217b50172e6fb57cc9ed480fb6a4d97402f6cd95d6aec722ab555224099-merged.mount: Deactivated successfully.
Dec  3 01:51:24 compute-0 podman[410623]: 2025-12-03 01:51:24.846306729 +0000 UTC m=+0.316897009 container remove 46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:51:24 compute-0 systemd[1]: libpod-conmon-46e6c091c35f0a52489d6c9021f23e8d61bc034f37239a2f5e936ca076ec25e5.scope: Deactivated successfully.
Dec  3 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.091168738 +0000 UTC m=+0.096873520 container create 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.055236626 +0000 UTC m=+0.060941478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:25 compute-0 systemd[1]: Started libpod-conmon-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope.
Dec  3 01:51:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.253405118 +0000 UTC m=+0.259109960 container init 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.290189344 +0000 UTC m=+0.295894136 container start 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 01:51:25 compute-0 podman[410661]: 2025-12-03 01:51:25.296449481 +0000 UTC m=+0.302154263 container attach 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:51:26 compute-0 sharp_elion[410677]: {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    "0": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "devices": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "/dev/loop3"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            ],
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_name": "ceph_lv0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_size": "21470642176",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "name": "ceph_lv0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "tags": {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_name": "ceph",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.crush_device_class": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.encrypted": "0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_id": "0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.vdo": "0"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            },
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "vg_name": "ceph_vg0"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        }
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    ],
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    "1": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "devices": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "/dev/loop4"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            ],
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_name": "ceph_lv1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_size": "21470642176",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "name": "ceph_lv1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "tags": {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_name": "ceph",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.crush_device_class": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.encrypted": "0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_id": "1",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.vdo": "0"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            },
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "vg_name": "ceph_vg1"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        }
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    ],
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    "2": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "devices": [
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "/dev/loop5"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            ],
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_name": "ceph_lv2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_size": "21470642176",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "name": "ceph_lv2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "tags": {
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.cluster_name": "ceph",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.crush_device_class": "",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.encrypted": "0",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osd_id": "2",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:                "ceph.vdo": "0"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            },
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "type": "block",
Dec  3 01:51:26 compute-0 sharp_elion[410677]:            "vg_name": "ceph_vg2"
Dec  3 01:51:26 compute-0 sharp_elion[410677]:        }
Dec  3 01:51:26 compute-0 sharp_elion[410677]:    ]
Dec  3 01:51:26 compute-0 sharp_elion[410677]: }
Dec  3 01:51:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:51:26 compute-0 systemd[1]: libpod-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope: Deactivated successfully.
Dec  3 01:51:26 compute-0 podman[410661]: 2025-12-03 01:51:26.115130226 +0000 UTC m=+1.120835018 container died 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d4e39b12c6af3a2b217e2c77fa44051bfa341faf8b81ffe7c7502df528295d-merged.mount: Deactivated successfully.
Dec  3 01:51:26 compute-0 podman[410661]: 2025-12-03 01:51:26.227747659 +0000 UTC m=+1.233452421 container remove 1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:51:26 compute-0 systemd[1]: libpod-conmon-1d177dfb8f072b0a3ef1853e7738783d2e7f8b57920f4dc39433a574d1ae0a02.scope: Deactivated successfully.
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.403990758 +0000 UTC m=+0.090353697 container create 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.374979321 +0000 UTC m=+0.061342290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:27 compute-0 systemd[1]: Started libpod-conmon-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope.
Dec  3 01:51:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.523292619 +0000 UTC m=+0.209655608 container init 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.542803519 +0000 UTC m=+0.229166458 container start 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.549478477 +0000 UTC m=+0.235841466 container attach 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:51:27 compute-0 naughty_lamarr[410862]: 167 167
Dec  3 01:51:27 compute-0 systemd[1]: libpod-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope: Deactivated successfully.
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.552456541 +0000 UTC m=+0.238819480 container died 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5944b72055c40ca0a076dc3e8c1fe3260529f02f41a0a8f3fcbb1c283248f41-merged.mount: Deactivated successfully.
Dec  3 01:51:27 compute-0 podman[410840]: 2025-12-03 01:51:27.61277565 +0000 UTC m=+0.299138549 container remove 716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamarr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 01:51:27 compute-0 podman[410857]: 2025-12-03 01:51:27.619752967 +0000 UTC m=+0.144974826 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Dec  3 01:51:27 compute-0 podman[410856]: 2025-12-03 01:51:27.624070949 +0000 UTC m=+0.154372041 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:51:27 compute-0 podman[410858]: 2025-12-03 01:51:27.624250424 +0000 UTC m=+0.146393146 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:51:27 compute-0 systemd[1]: libpod-conmon-716d745033bb33d9918a6825f1318a5ffc309f8447afc57f81b05d8e497fde7c.scope: Deactivated successfully.
Dec  3 01:51:27 compute-0 podman[410940]: 2025-12-03 01:51:27.850471637 +0000 UTC m=+0.079596413 container create e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:51:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:27 compute-0 podman[410940]: 2025-12-03 01:51:27.818775214 +0000 UTC m=+0.047900030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:51:27 compute-0 systemd[1]: Started libpod-conmon-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope.
Dec  3 01:51:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.03797144 +0000 UTC m=+0.267096206 container init e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  3 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.060409002 +0000 UTC m=+0.289533788 container start e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:51:28 compute-0 podman[410940]: 2025-12-03 01:51:28.067065039 +0000 UTC m=+0.296189785 container attach e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:51:28
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:51:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:51:29 compute-0 objective_jackson[410956]: {
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_id": 2,
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "type": "bluestore"
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    },
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_id": 1,
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "type": "bluestore"
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    },
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_id": 0,
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:51:29 compute-0 objective_jackson[410956]:        "type": "bluestore"
Dec  3 01:51:29 compute-0 objective_jackson[410956]:    }
Dec  3 01:51:29 compute-0 objective_jackson[410956]: }
Dec  3 01:51:29 compute-0 systemd[1]: libpod-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Deactivated successfully.
Dec  3 01:51:29 compute-0 systemd[1]: libpod-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Consumed 1.221s CPU time.
Dec  3 01:51:29 compute-0 podman[410940]: 2025-12-03 01:51:29.282717688 +0000 UTC m=+1.511842514 container died e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad53548dca1152339d64d9f423771656242363e46c35c3ac394a016d5ff477c-merged.mount: Deactivated successfully.
Dec  3 01:51:29 compute-0 podman[410940]: 2025-12-03 01:51:29.384948998 +0000 UTC m=+1.614073784 container remove e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:51:29 compute-0 systemd[1]: libpod-conmon-e3f283bd94e21abdee3f94eb659cfdc691e68764c9d7f69848cd428cd4084bb5.scope: Deactivated successfully.
Dec  3 01:51:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:51:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:51:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 11bc2454-f008-4729-83b0-295c03e45fcc does not exist
Dec  3 01:51:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b8446ea6-38e4-4c1a-b026-e561b2ce87d0 does not exist
Dec  3 01:51:29 compute-0 podman[158098]: time="2025-12-03T01:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:51:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:51:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8122 "" "Go-http-client/1.1"
Dec  3 01:51:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 01:51:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:30 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:51:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:51:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Dec  3 01:51:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:32 compute-0 podman[411049]: 2025-12-03 01:51:32.904327572 +0000 UTC m=+0.152296263 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:51:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Dec  3 01:51:35 compute-0 podman[411068]: 2025-12-03 01:51:35.88150338 +0000 UTC m=+0.131142606 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, name=ubi9)
Dec  3 01:51:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.554275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697554323, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3504778, "memory_usage": 3561728, "flush_reason": "Manual Compaction"}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697585202, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3417041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20887, "largest_seqno": 22940, "table_properties": {"data_size": 3407676, "index_size": 5923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18587, "raw_average_key_size": 19, "raw_value_size": 3389090, "raw_average_value_size": 3640, "num_data_blocks": 268, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726470, "oldest_key_time": 1764726470, "file_creation_time": 1764726697, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 31054 microseconds, and 12858 cpu microseconds.
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.585297) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3417041 bytes OK
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.585353) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588109) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588127) EVENT_LOG_v1 {"time_micros": 1764726697588121, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.588147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3496173, prev total WAL file size 3496173, number of live WAL files 2.
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.590126) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3336KB)], [50(7387KB)]
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697590222, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10981484, "oldest_snapshot_seqno": -1}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4691 keys, 9252306 bytes, temperature: kUnknown
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697681748, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9252306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9218321, "index_size": 21139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 114847, "raw_average_key_size": 24, "raw_value_size": 9130825, "raw_average_value_size": 1946, "num_data_blocks": 892, "num_entries": 4691, "num_filter_entries": 4691, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726697, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.682013) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9252306 bytes
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.683952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.9 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5205, records dropped: 514 output_compression: NoCompression
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.683970) EVENT_LOG_v1 {"time_micros": 1764726697683961, "job": 26, "event": "compaction_finished", "compaction_time_micros": 91578, "compaction_time_cpu_micros": 41059, "output_level": 6, "num_output_files": 1, "total_output_size": 9252306, "num_input_records": 5205, "num_output_records": 4691, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697684794, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726697686382, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.589599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:51:37.686828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:51:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:51:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:51:39 compute-0 podman[411088]: 2025-12-03 01:51:39.908839674 +0000 UTC m=+0.150948123 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec  3 01:51:39 compute-0 podman[411089]: 2025-12-03 01:51:39.922061147 +0000 UTC m=+0.153543577 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:51:39 compute-0 podman[411087]: 2025-12-03 01:51:39.947671789 +0000 UTC m=+0.195810218 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:51:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:41 compute-0 podman[411148]: 2025-12-03 01:51:41.853077441 +0000 UTC m=+0.100579725 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:51:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:51:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:51:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/661218190' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:51:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:50 compute-0 nova_compute[351485]: 2025-12-03 01:51:50.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:51:51 compute-0 nova_compute[351485]: 2025-12-03 01:51:51.598 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:51:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:51:52 compute-0 nova_compute[351485]: 2025-12-03 01:51:52.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:51:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:51:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392057991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.104 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.694 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.698 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4540MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.699 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:51:53 compute-0 nova_compute[351485]: 2025-12-03 01:51:53.700 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.079 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.080 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:51:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.173 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.283 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.284 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.305 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.339 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.364 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:51:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:51:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3041397874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.891 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.902 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.923 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.925 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.926 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.927 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 01:51:54 compute-0 nova_compute[351485]: 2025-12-03 01:51:54.995 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.996 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.997 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:55 compute-0 nova_compute[351485]: 2025-12-03 01:51:55.998 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:57 compute-0 nova_compute[351485]: 2025-12-03 01:51:57.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:57 compute-0 podman[411222]: 2025-12-03 01:51:57.872339352 +0000 UTC m=+0.118221002 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 01:51:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:51:57 compute-0 podman[411221]: 2025-12-03 01:51:57.88398558 +0000 UTC m=+0.132604667 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  3 01:51:57 compute-0 podman[411223]: 2025-12-03 01:51:57.908710937 +0000 UTC m=+0.146679384 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:51:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 01:51:59 compute-0 nova_compute[351485]: 2025-12-03 01:51:59.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.617 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.617 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:51:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:51:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:51:59 compute-0 podman[158098]: time="2025-12-03T01:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:51:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:51:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec  3 01:52:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:00 compute-0 nova_compute[351485]: 2025-12-03 01:52:00.615 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:00 compute-0 nova_compute[351485]: 2025-12-03 01:52:00.615 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:52:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:52:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:03 compute-0 podman[411281]: 2025-12-03 01:52:03.906364352 +0000 UTC m=+0.157055726 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:52:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:06 compute-0 podman[411299]: 2025-12-03 01:52:06.910445678 +0000 UTC m=+0.163131237 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git)
Dec  3 01:52:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:10 compute-0 podman[411319]: 2025-12-03 01:52:10.890412899 +0000 UTC m=+0.127423931 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec  3 01:52:10 compute-0 podman[411320]: 2025-12-03 01:52:10.909308942 +0000 UTC m=+0.142308461 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  3 01:52:10 compute-0 podman[411318]: 2025-12-03 01:52:10.947854088 +0000 UTC m=+0.189373177 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:52:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:12 compute-0 podman[411378]: 2025-12-03 01:52:12.856311426 +0000 UTC m=+0.114325602 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:52:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:19 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Dec  3 01:52:19 compute-0 systemd[1]: session-60.scope: Consumed 12.140s CPU time.
Dec  3 01:52:19 compute-0 systemd-logind[800]: Session 60 logged out. Waiting for processes to exit.
Dec  3 01:52:19 compute-0 systemd-logind[800]: Removed session 60.
Dec  3 01:52:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:28 compute-0 podman[411407]: 2025-12-03 01:52:28.291862642 +0000 UTC m=+0.106774939 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:52:28 compute-0 podman[411406]: 2025-12-03 01:52:28.296144173 +0000 UTC m=+0.115427443 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec  3 01:52:28 compute-0 podman[411405]: 2025-12-03 01:52:28.320020266 +0000 UTC m=+0.142073434 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:52:28
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log']
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:52:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:52:29 compute-0 podman[158098]: time="2025-12-03T01:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:52:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:52:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
Dec  3 01:52:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4037af5e-c42c-45ef-b7ef-194df5cc87d8 does not exist
Dec  3 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1c656be8-b62b-4e21-a19b-6c891773f6bf does not exist
Dec  3 01:52:30 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7b669d7a-e871-4502-b191-c430729dca35 does not exist
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:52:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:52:30 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:52:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:52:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:52:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:31 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.124962014 +0000 UTC m=+0.092270970 container create 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 01:52:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.088352753 +0000 UTC m=+0.055661709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:32 compute-0 systemd[1]: Started libpod-conmon-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope.
Dec  3 01:52:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.285812286 +0000 UTC m=+0.253121292 container init 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.303387431 +0000 UTC m=+0.270696387 container start 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.310217214 +0000 UTC m=+0.277526210 container attach 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 01:52:32 compute-0 relaxed_bassi[411745]: 167 167
Dec  3 01:52:32 compute-0 systemd[1]: libpod-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope: Deactivated successfully.
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.316694016 +0000 UTC m=+0.284002982 container died 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a212d7359d29775cc07ae3d07bd617b3b6e4c036b7016dd2411e9c3b54da8f99-merged.mount: Deactivated successfully.
Dec  3 01:52:32 compute-0 podman[411729]: 2025-12-03 01:52:32.404115159 +0000 UTC m=+0.371424095 container remove 38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:52:32 compute-0 systemd[1]: libpod-conmon-38ea1781e3a576b5b67c2a3ce14b9191081fd5d1ec1e8f409c94630dbdeb36af.scope: Deactivated successfully.
Dec  3 01:52:32 compute-0 podman[411767]: 2025-12-03 01:52:32.649936955 +0000 UTC m=+0.054507467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:32 compute-0 podman[411767]: 2025-12-03 01:52:32.808175223 +0000 UTC m=+0.212745685 container create 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:52:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:32 compute-0 systemd[1]: Started libpod-conmon-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope.
Dec  3 01:52:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.000591204 +0000 UTC m=+0.405161676 container init 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec  3 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.033686617 +0000 UTC m=+0.438257069 container start 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 01:52:33 compute-0 podman[411767]: 2025-12-03 01:52:33.040731915 +0000 UTC m=+0.445302387 container attach 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 01:52:33 compute-0 nova_compute[351485]: 2025-12-03 01:52:33.325 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:34 compute-0 cool_mirzakhani[411783]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:52:34 compute-0 cool_mirzakhani[411783]: --> relative data size: 1.0
Dec  3 01:52:34 compute-0 cool_mirzakhani[411783]: --> All data devices are unavailable
Dec  3 01:52:34 compute-0 systemd[1]: libpod-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Deactivated successfully.
Dec  3 01:52:34 compute-0 podman[411767]: 2025-12-03 01:52:34.249467169 +0000 UTC m=+1.654037621 container died 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:52:34 compute-0 systemd[1]: libpod-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Consumed 1.160s CPU time.
Dec  3 01:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6dc2a8d5ba83e2036a5894a3599beae635f1ba8a5cddb5ca93477ee03f5b4f-merged.mount: Deactivated successfully.
Dec  3 01:52:34 compute-0 podman[411767]: 2025-12-03 01:52:34.331017866 +0000 UTC m=+1.735588288 container remove 47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:52:34 compute-0 systemd[1]: libpod-conmon-47ef7282368f7331d924c76be29e8914a3e20e09bbc1036bfd605fa061e84049.scope: Deactivated successfully.
Dec  3 01:52:34 compute-0 podman[411813]: 2025-12-03 01:52:34.411164365 +0000 UTC m=+0.114661272 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  3 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.506633068 +0000 UTC m=+0.089721999 container create 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.472853716 +0000 UTC m=+0.055942697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:35 compute-0 systemd[1]: Started libpod-conmon-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope.
Dec  3 01:52:35 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.699692697 +0000 UTC m=+0.282781688 container init 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.717861399 +0000 UTC m=+0.300950320 container start 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:52:35 compute-0 podman[411983]: 2025-12-03 01:52:35.724515887 +0000 UTC m=+0.307604808 container attach 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:52:35 compute-0 ecstatic_ritchie[412000]: 167 167
Dec  3 01:52:35 compute-0 systemd[1]: libpod-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope: Deactivated successfully.
Dec  3 01:52:35 compute-0 podman[412005]: 2025-12-03 01:52:35.811879618 +0000 UTC m=+0.054086965 container died 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e3e46d88c5bf1786dacf4417ef3e49574f978b4d586dc9cd0e44358ec6f27b-merged.mount: Deactivated successfully.
Dec  3 01:52:35 compute-0 podman[412005]: 2025-12-03 01:52:35.88826476 +0000 UTC m=+0.130472107 container remove 7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 01:52:35 compute-0 systemd[1]: libpod-conmon-7baa2898b14d0bddda8730354b4c0af20b7a637b6e3eb4a97f1a4e51e3ec11e3.scope: Deactivated successfully.
Dec  3 01:52:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.162755363 +0000 UTC m=+0.066718980 container create 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:52:36 compute-0 systemd[1]: Started libpod-conmon-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope.
Dec  3 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.141936087 +0000 UTC m=+0.045899734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.335011707 +0000 UTC m=+0.238975394 container init 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.355241127 +0000 UTC m=+0.259204784 container start 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  3 01:52:36 compute-0 podman[412026]: 2025-12-03 01:52:36.36245315 +0000 UTC m=+0.266416867 container attach 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]: {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    "0": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "devices": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "/dev/loop3"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            ],
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_name": "ceph_lv0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_size": "21470642176",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "name": "ceph_lv0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "tags": {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_name": "ceph",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.crush_device_class": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.encrypted": "0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_id": "0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.vdo": "0"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            },
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "vg_name": "ceph_vg0"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        }
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    ],
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    "1": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "devices": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "/dev/loop4"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            ],
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_name": "ceph_lv1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_size": "21470642176",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "name": "ceph_lv1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "tags": {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_name": "ceph",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.crush_device_class": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.encrypted": "0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_id": "1",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.vdo": "0"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            },
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "vg_name": "ceph_vg1"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        }
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    ],
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    "2": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "devices": [
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "/dev/loop5"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            ],
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_name": "ceph_lv2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_size": "21470642176",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "name": "ceph_lv2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "tags": {
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.cluster_name": "ceph",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.crush_device_class": "",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.encrypted": "0",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osd_id": "2",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:                "ceph.vdo": "0"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            },
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "type": "block",
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:            "vg_name": "ceph_vg2"
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:        }
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]:    ]
Dec  3 01:52:37 compute-0 nervous_blackwell[412040]: }
Dec  3 01:52:37 compute-0 systemd[1]: libpod-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope: Deactivated successfully.
Dec  3 01:52:37 compute-0 podman[412026]: 2025-12-03 01:52:37.180110405 +0000 UTC m=+1.084074052 container died 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c295cc2645e1d3d10ba6fb0667c8f9ba61c9fa40f7124fee6a3048df205ca29b-merged.mount: Deactivated successfully.
Dec  3 01:52:37 compute-0 podman[412026]: 2025-12-03 01:52:37.285472714 +0000 UTC m=+1.189436331 container remove 272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:52:37 compute-0 systemd[1]: libpod-conmon-272a113831c0a5d89218f01480c6af74403796373c23cad0dc58964f32101d7e.scope: Deactivated successfully.
Dec  3 01:52:37 compute-0 podman[412050]: 2025-12-03 01:52:37.356239957 +0000 UTC m=+0.127203834 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=)
Dec  3 01:52:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:52:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.371977795 +0000 UTC m=+0.096931032 container create 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.332377779 +0000 UTC m=+0.057331066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:38 compute-0 systemd[1]: Started libpod-conmon-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope.
Dec  3 01:52:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.526498828 +0000 UTC m=+0.251452105 container init 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.544374742 +0000 UTC m=+0.269327969 container start 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.550899066 +0000 UTC m=+0.275852353 container attach 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 01:52:38 compute-0 festive_rosalind[412232]: 167 167
Dec  3 01:52:38 compute-0 systemd[1]: libpod-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope: Deactivated successfully.
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.55815104 +0000 UTC m=+0.283104267 container died 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 01:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-00f3c9bb62f4be22286279384f53763777ef20e457a1da93ee2b4d832f45d808-merged.mount: Deactivated successfully.
Dec  3 01:52:38 compute-0 podman[412216]: 2025-12-03 01:52:38.630489558 +0000 UTC m=+0.355442785 container remove 3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:52:38 compute-0 systemd[1]: libpod-conmon-3b4df413dbdba7f0449010275c8f302a920d0be698d53112d59405192e1fd645.scope: Deactivated successfully.
Dec  3 01:52:38 compute-0 podman[412255]: 2025-12-03 01:52:38.903382336 +0000 UTC m=+0.087180467 container create 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:52:38 compute-0 podman[412255]: 2025-12-03 01:52:38.86412925 +0000 UTC m=+0.047927431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:52:38 compute-0 systemd[1]: Started libpod-conmon-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope.
Dec  3 01:52:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.08560768 +0000 UTC m=+0.269405851 container init 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.114855784 +0000 UTC m=+0.298653925 container start 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:52:39 compute-0 podman[412255]: 2025-12-03 01:52:39.124044143 +0000 UTC m=+0.307842304 container attach 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:52:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:40 compute-0 loving_mahavira[412271]: {
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_id": 2,
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "type": "bluestore"
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    },
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_id": 1,
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "type": "bluestore"
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    },
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_id": 0,
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:        "type": "bluestore"
Dec  3 01:52:40 compute-0 loving_mahavira[412271]:    }
Dec  3 01:52:40 compute-0 loving_mahavira[412271]: }
Dec  3 01:52:40 compute-0 systemd[1]: libpod-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Deactivated successfully.
Dec  3 01:52:40 compute-0 podman[412255]: 2025-12-03 01:52:40.31120628 +0000 UTC m=+1.495004411 container died 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:52:40 compute-0 systemd[1]: libpod-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Consumed 1.192s CPU time.
Dec  3 01:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb09da811caea332bf9b74bb1e79207fc2d0ca6d91c669d4136b57eb5e5612f3-merged.mount: Deactivated successfully.
Dec  3 01:52:40 compute-0 podman[412255]: 2025-12-03 01:52:40.415513038 +0000 UTC m=+1.599311139 container remove 732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:52:40 compute-0 systemd[1]: libpod-conmon-732b101fe5163e0387d292be9eef51740d27b20afcff35e06800ede152f2af8b.scope: Deactivated successfully.
Dec  3 01:52:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:52:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:52:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 76d17026-e665-45a4-8b46-989a3895bad4 does not exist
Dec  3 01:52:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 622e96ae-8d1c-431e-8bc6-948d454a791f does not exist
Dec  3 01:52:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:52:41 compute-0 podman[412366]: 2025-12-03 01:52:41.890640317 +0000 UTC m=+0.132553025 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec  3 01:52:41 compute-0 podman[412367]: 2025-12-03 01:52:41.891785719 +0000 UTC m=+0.122295766 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:52:41 compute-0 podman[412365]: 2025-12-03 01:52:41.951932814 +0000 UTC m=+0.198545895 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 01:52:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:43 compute-0 podman[412427]: 2025-12-03 01:52:43.885720466 +0000 UTC m=+0.132653418 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:52:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:52:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:52:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583600602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:52:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.604 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.604 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.605 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:52:51 compute-0 nova_compute[351485]: 2025-12-03 01:52:51.637 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:52:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.625 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:52:54 compute-0 nova_compute[351485]: 2025-12-03 01:52:54.626 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:52:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:52:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734165423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.098 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.665 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.666 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.666 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.667 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.757 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.757 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:52:55 compute-0 nova_compute[351485]: 2025-12-03 01:52:55.782 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:52:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:52:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014041720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.324 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.335 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.359 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.361 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:52:56 compute-0 nova_compute[351485]: 2025-12-03 01:52:56.361 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:57 compute-0 nova_compute[351485]: 2025-12-03 01:52:57.361 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:52:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:52:58 compute-0 nova_compute[351485]: 2025-12-03 01:52:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:58 compute-0 podman[412495]: 2025-12-03 01:52:58.859472341 +0000 UTC m=+0.106764409 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 01:52:58 compute-0 podman[412496]: 2025-12-03 01:52:58.901079523 +0000 UTC m=+0.144402289 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  3 01:52:58 compute-0 podman[412497]: 2025-12-03 01:52:58.901224797 +0000 UTC m=+0.137301299 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:52:59 compute-0 nova_compute[351485]: 2025-12-03 01:52:59.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.618 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:52:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:52:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:52:59 compute-0 podman[158098]: time="2025-12-03T01:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:52:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:52:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8114 "" "Go-http-client/1.1"
Dec  3 01:53:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:53:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:53:01 compute-0 nova_compute[351485]: 2025-12-03 01:53:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:01 compute-0 nova_compute[351485]: 2025-12-03 01:53:01.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:53:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:04 compute-0 podman[412556]: 2025-12-03 01:53:04.876719491 +0000 UTC m=+0.126906076 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:53:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:07 compute-0 podman[412575]: 2025-12-03 01:53:07.841879629 +0000 UTC m=+0.109848126 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 01:53:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:12 compute-0 podman[412598]: 2025-12-03 01:53:12.8814684 +0000 UTC m=+0.114161647 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:53:12 compute-0 podman[412597]: 2025-12-03 01:53:12.893138369 +0000 UTC m=+0.136400394 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:53:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:12 compute-0 podman[412596]: 2025-12-03 01:53:12.928286749 +0000 UTC m=+0.183204582 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 01:53:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:14 compute-0 podman[412658]: 2025-12-03 01:53:14.813626816 +0000 UTC m=+0.103616950 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:53:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.914745) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797914789, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 505, "total_data_size": 1478612, "memory_usage": 1506448, "flush_reason": "Manual Compaction"}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797930104, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1240902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22941, "largest_seqno": 24201, "table_properties": {"data_size": 1235671, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14687, "raw_average_key_size": 18, "raw_value_size": 1223034, "raw_average_value_size": 1570, "num_data_blocks": 98, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726698, "oldest_key_time": 1764726698, "file_creation_time": 1764726797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 15477 microseconds, and 7809 cpu microseconds.
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.930198) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1240902 bytes OK
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.930246) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.933989) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.934013) EVENT_LOG_v1 {"time_micros": 1764726797934006, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.934035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1471824, prev total WAL file size 1471824, number of live WAL files 2.
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.937222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1211KB)], [53(9035KB)]
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797937303, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10493208, "oldest_snapshot_seqno": -1}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4464 keys, 7334291 bytes, temperature: kUnknown
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797994060, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7334291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7304375, "index_size": 17646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 111723, "raw_average_key_size": 25, "raw_value_size": 7223409, "raw_average_value_size": 1618, "num_data_blocks": 736, "num_entries": 4464, "num_filter_entries": 4464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.994353) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7334291 bytes
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.997118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.6 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.4) write-amplify(5.9) OK, records in: 5470, records dropped: 1006 output_compression: NoCompression
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.997153) EVENT_LOG_v1 {"time_micros": 1764726797997134, "job": 28, "event": "compaction_finished", "compaction_time_micros": 56843, "compaction_time_cpu_micros": 35750, "output_level": 6, "num_output_files": 1, "total_output_size": 7334291, "num_input_records": 5470, "num_output_records": 4464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:53:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726797997775, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726798001228, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:17.936159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:53:18.001686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:53:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  3 01:53:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  3 01:53:18 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.503 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.503 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.504 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.509 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.516 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e9ae73b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.523 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.529 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.530 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.531 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:53:19.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:53:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  3 01:53:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  3 01:53:19 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  3 01:53:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  3 01:53:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  3 01:53:20 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  3 01:53:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 8.0 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Dec  3 01:53:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Dec  3 01:53:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec  3 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  3 01:53:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  3 01:53:27 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:53:28
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'backups', 'default.rgw.control']
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:53:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:53:29 compute-0 podman[158098]: time="2025-12-03T01:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:53:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:53:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
Dec  3 01:53:29 compute-0 podman[412685]: 2025-12-03 01:53:29.871900375 +0000 UTC m=+0.131803934 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:53:29 compute-0 podman[412686]: 2025-12-03 01:53:29.885310523 +0000 UTC m=+0.130381414 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 01:53:29 compute-0 podman[412687]: 2025-12-03 01:53:29.917479319 +0000 UTC m=+0.142365572 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:53:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 892 KiB/s wr, 8 op/s
Dec  3 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.191 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.193 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 01:53:31 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:31.195 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:53:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:53:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec  3 01:53:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 204 B/s wr, 0 op/s
Dec  3 01:53:35 compute-0 podman[412740]: 2025-12-03 01:53:35.883481243 +0000 UTC m=+0.133468081 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 01:53:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:53:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:53:38 compute-0 podman[412758]: 2025-12-03 01:53:38.883670639 +0000 UTC m=+0.140418257 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm)
Dec  3 01:53:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 015178ae-2471-47b1-b3ba-26591ba635a1 does not exist
Dec  3 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9c95dc75-4c30-4d2f-b515-39406adc2265 does not exist
Dec  3 01:53:42 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 35839085-16f6-49bd-8a42-9bf4a4a19cd6 does not exist
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:53:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:42 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:53:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.192265878 +0000 UTC m=+0.082919287 container create f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.153490785 +0000 UTC m=+0.044144234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:43 compute-0 systemd[1]: Started libpod-conmon-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope.
Dec  3 01:53:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.318290489 +0000 UTC m=+0.208943908 container init f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.33536827 +0000 UTC m=+0.226021649 container start f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:53:43 compute-0 cranky_goldstine[413076]: 167 167
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.345262748 +0000 UTC m=+0.235916117 container attach f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:53:43 compute-0 systemd[1]: libpod-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope: Deactivated successfully.
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.346160044 +0000 UTC m=+0.236813443 container died f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d4dd9b41a50b79e88e40f1f26944fe2ef5cbfdb1fa8def4f73d935e38079dbd-merged.mount: Deactivated successfully.
Dec  3 01:53:43 compute-0 podman[413049]: 2025-12-03 01:53:43.417583396 +0000 UTC m=+0.308236765 container remove f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldstine, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 01:53:43 compute-0 podman[413064]: 2025-12-03 01:53:43.423150823 +0000 UTC m=+0.163723634 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vendor=Red Hat, Inc.)
Dec  3 01:53:43 compute-0 podman[413065]: 2025-12-03 01:53:43.43051026 +0000 UTC m=+0.156026307 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd)
Dec  3 01:53:43 compute-0 podman[413061]: 2025-12-03 01:53:43.430686215 +0000 UTC m=+0.175651300 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 01:53:43 compute-0 systemd[1]: libpod-conmon-f3491060eee29963b8de91c1bd560e991a9a0e7e3c610494711c6844a9ea20ab.scope: Deactivated successfully.
Dec  3 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.623122577 +0000 UTC m=+0.066903196 container create c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.597415943 +0000 UTC m=+0.041196672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:43 compute-0 systemd[1]: Started libpod-conmon-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope.
Dec  3 01:53:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.795409161 +0000 UTC m=+0.239189820 container init c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.81099374 +0000 UTC m=+0.254774389 container start c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:53:43 compute-0 podman[413147]: 2025-12-03 01:53:43.817522714 +0000 UTC m=+0.261303423 container attach c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:53:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:45 compute-0 elastic_hermann[413163]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:53:45 compute-0 elastic_hermann[413163]: --> relative data size: 1.0
Dec  3 01:53:45 compute-0 elastic_hermann[413163]: --> All data devices are unavailable
Dec  3 01:53:45 compute-0 systemd[1]: libpod-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Deactivated successfully.
Dec  3 01:53:45 compute-0 systemd[1]: libpod-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Consumed 1.278s CPU time.
Dec  3 01:53:45 compute-0 podman[413147]: 2025-12-03 01:53:45.156276391 +0000 UTC m=+1.600057050 container died c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4400428c470b060b60ab4936a69e9ee3834d25d795aa2e9790a174c19514aa79-merged.mount: Deactivated successfully.
Dec  3 01:53:45 compute-0 podman[413147]: 2025-12-03 01:53:45.228364452 +0000 UTC m=+1.672145071 container remove c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:53:45 compute-0 systemd[1]: libpod-conmon-c4768658ef0dcc9552e0a85bae98d01cebd3c745a05f0c4db5c268e515ac61c8.scope: Deactivated successfully.
Dec  3 01:53:45 compute-0 podman[413195]: 2025-12-03 01:53:45.321762343 +0000 UTC m=+0.125863477 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:53:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.351797493 +0000 UTC m=+0.095004398 container create b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.317377623 +0000 UTC m=+0.060584588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:46 compute-0 systemd[1]: Started libpod-conmon-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope.
Dec  3 01:53:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.49968684 +0000 UTC m=+0.242893745 container init b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.518254663 +0000 UTC m=+0.261461568 container start b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.525062905 +0000 UTC m=+0.268269870 container attach b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:53:46 compute-0 kind_mclean[413381]: 167 167
Dec  3 01:53:46 compute-0 systemd[1]: libpod-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope: Deactivated successfully.
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.532682569 +0000 UTC m=+0.275889484 container died b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64ff86d6cc5afa97825f15a84a6f40508d1df2f3cd0e40c4b8063e10d1c5622-merged.mount: Deactivated successfully.
Dec  3 01:53:46 compute-0 podman[413366]: 2025-12-03 01:53:46.597203507 +0000 UTC m=+0.340410392 container remove b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:53:46 compute-0 systemd[1]: libpod-conmon-b21206a3b6021ea303e38a6a2f4445f641941a85651d3198f12976431fcbfb8b.scope: Deactivated successfully.
Dec  3 01:53:46 compute-0 podman[413403]: 2025-12-03 01:53:46.865779364 +0000 UTC m=+0.095317767 container create 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:53:46 compute-0 podman[413403]: 2025-12-03 01:53:46.826473386 +0000 UTC m=+0.056011879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:46 compute-0 systemd[1]: Started libpod-conmon-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope.
Dec  3 01:53:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.050126128 +0000 UTC m=+0.279664601 container init 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.07046494 +0000 UTC m=+0.300003373 container start 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.077778137 +0000 UTC m=+0.307316620 container attach 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:53:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:53:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498895008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:53:47 compute-0 recursing_lewin[413419]: {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    "0": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "devices": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "/dev/loop3"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            ],
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_name": "ceph_lv0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_size": "21470642176",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "name": "ceph_lv0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "tags": {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_name": "ceph",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.crush_device_class": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.encrypted": "0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_id": "0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.vdo": "0"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            },
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "vg_name": "ceph_vg0"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        }
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    ],
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    "1": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "devices": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "/dev/loop4"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            ],
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_name": "ceph_lv1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_size": "21470642176",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "name": "ceph_lv1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "tags": {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_name": "ceph",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.crush_device_class": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.encrypted": "0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_id": "1",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.vdo": "0"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            },
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "vg_name": "ceph_vg1"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        }
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    ],
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    "2": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "devices": [
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "/dev/loop5"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            ],
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_name": "ceph_lv2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_size": "21470642176",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "name": "ceph_lv2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "tags": {
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.cluster_name": "ceph",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.crush_device_class": "",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.encrypted": "0",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osd_id": "2",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:                "ceph.vdo": "0"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            },
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "type": "block",
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:            "vg_name": "ceph_vg2"
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:        }
Dec  3 01:53:47 compute-0 recursing_lewin[413419]:    ]
Dec  3 01:53:47 compute-0 recursing_lewin[413419]: }
Dec  3 01:53:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:47 compute-0 systemd[1]: libpod-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope: Deactivated successfully.
Dec  3 01:53:47 compute-0 podman[413403]: 2025-12-03 01:53:47.964412606 +0000 UTC m=+1.193951039 container died 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3295690af6102fb22464a1fcf17d612f5ef70d5697dbea8a4000ab59be34a5ad-merged.mount: Deactivated successfully.
Dec  3 01:53:48 compute-0 podman[413403]: 2025-12-03 01:53:48.074097037 +0000 UTC m=+1.303635470 container remove 0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lewin, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:53:48 compute-0 systemd[1]: libpod-conmon-0b252b090bea735362771b0a6a824bb736a9f61621416db134cf44bdfb65d9d1.scope: Deactivated successfully.
Dec  3 01:53:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.316051676 +0000 UTC m=+0.094316658 container create d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.279337422 +0000 UTC m=+0.057602414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:49 compute-0 systemd[1]: Started libpod-conmon-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope.
Dec  3 01:53:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.463896972 +0000 UTC m=+0.242162004 container init d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.480118459 +0000 UTC m=+0.258383431 container start d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 01:53:49 compute-0 infallible_cray[413593]: 167 167
Dec  3 01:53:49 compute-0 systemd[1]: libpod-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope: Deactivated successfully.
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.494646138 +0000 UTC m=+0.272911160 container attach d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.495680227 +0000 UTC m=+0.273945229 container died d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4670849a67fa2ed137219905757d53bef962c02e53f570fd71392f2337ff4001-merged.mount: Deactivated successfully.
Dec  3 01:53:49 compute-0 podman[413577]: 2025-12-03 01:53:49.575929508 +0000 UTC m=+0.354194480 container remove d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:53:49 compute-0 systemd[1]: libpod-conmon-d2769998629204ac0d51618bc2e4b237890e018a7cdb60b9c0b13ec05caf9902.scope: Deactivated successfully.
Dec  3 01:53:49 compute-0 podman[413616]: 2025-12-03 01:53:49.882361541 +0000 UTC m=+0.094600556 container create ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:53:49 compute-0 podman[413616]: 2025-12-03 01:53:49.848203349 +0000 UTC m=+0.060442424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:53:49 compute-0 systemd[1]: Started libpod-conmon-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope.
Dec  3 01:53:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.032275285 +0000 UTC m=+0.244514390 container init ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.063020721 +0000 UTC m=+0.275259756 container start ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:53:50 compute-0 podman[413616]: 2025-12-03 01:53:50.06971873 +0000 UTC m=+0.281957755 container attach ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:53:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:50 compute-0 nova_compute[351485]: 2025-12-03 01:53:50.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]: {
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_id": 2,
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "type": "bluestore"
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    },
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_id": 1,
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "type": "bluestore"
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    },
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_id": 0,
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:        "type": "bluestore"
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]:    }
Dec  3 01:53:51 compute-0 clever_chandrasekhar[413632]: }
Dec  3 01:53:51 compute-0 systemd[1]: libpod-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Deactivated successfully.
Dec  3 01:53:51 compute-0 podman[413616]: 2025-12-03 01:53:51.344152206 +0000 UTC m=+1.556391281 container died ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:53:51 compute-0 systemd[1]: libpod-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Consumed 1.276s CPU time.
Dec  3 01:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-67674bb94bbc39e617441784dc74b6d0e2e2fd61b2eae88b9ed55a53cf3b8cbe-merged.mount: Deactivated successfully.
Dec  3 01:53:51 compute-0 podman[413616]: 2025-12-03 01:53:51.458324652 +0000 UTC m=+1.670563657 container remove ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:53:51 compute-0 systemd[1]: libpod-conmon-ecee97ad8f66370b0977b902f005e3129961ad5f1c8461c6b0cd04e43d04c539.scope: Deactivated successfully.
Dec  3 01:53:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:53:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:53:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e60e27d3-f16e-429f-b5df-3d90e1fe752b does not exist
Dec  3 01:53:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 19f40171-6d0e-4543-a301-e30adc95a6f0 does not exist
Dec  3 01:53:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:53:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:53:53 compute-0 nova_compute[351485]: 2025-12-03 01:53:53.601 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 01:53:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.620 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:53:54 compute-0 nova_compute[351485]: 2025-12-03 01:53:54.620 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:53:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:53:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153023216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.101 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.648 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.650 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.651 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.652 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.745 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.746 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:53:55 compute-0 nova_compute[351485]: 2025-12-03 01:53:55.768 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:53:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:53:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429369695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.312 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.322 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.350 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.352 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:53:56 compute-0 nova_compute[351485]: 2025-12-03 01:53:56.352 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:53:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.353 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.353 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:53:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:58 compute-0 nova_compute[351485]: 2025-12-03 01:53:58.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.619 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:53:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:53:59.620 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:53:59 compute-0 podman[158098]: time="2025-12-03T01:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:53:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:53:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8127 "" "Go-http-client/1.1"
Dec  3 01:54:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:00 compute-0 podman[413771]: 2025-12-03 01:54:00.87096081 +0000 UTC m=+0.106558243 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:54:00 compute-0 podman[413769]: 2025-12-03 01:54:00.871153166 +0000 UTC m=+0.121121214 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:54:00 compute-0 podman[413770]: 2025-12-03 01:54:00.888294239 +0000 UTC m=+0.130930400 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Dec  3 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:54:01 compute-0 openstack_network_exporter[368278]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:54:01 compute-0 nova_compute[351485]: 2025-12-03 01:54:01.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:54:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:03 compute-0 nova_compute[351485]: 2025-12-03 01:54:03.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:54:03 compute-0 nova_compute[351485]: 2025-12-03 01:54:03.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:54:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:06 compute-0 podman[413829]: 2025-12-03 01:54:06.901651697 +0000 UTC m=+0.148665569 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:54:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:09 compute-0 podman[413848]: 2025-12-03 01:54:09.874850594 +0000 UTC m=+0.122604465 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm)
Dec  3 01:54:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:13 compute-0 podman[413868]: 2025-12-03 01:54:13.881076074 +0000 UTC m=+0.124642422 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350)
Dec  3 01:54:13 compute-0 podman[413869]: 2025-12-03 01:54:13.929724125 +0000 UTC m=+0.164548077 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 01:54:13 compute-0 podman[413867]: 2025-12-03 01:54:13.947899147 +0000 UTC m=+0.195740336 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:54:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:15 compute-0 podman[413929]: 2025-12-03 01:54:15.837814522 +0000 UTC m=+0.085778877 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:54:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:18.906 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:54:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:18.907 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 01:54:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:20.911 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:54:28
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.log']
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:54:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:54:29 compute-0 podman[158098]: time="2025-12-03T01:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:54:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 01:54:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8135 "" "Go-http-client/1.1"
Dec  3 01:54:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:54:31 compute-0 openstack_network_exporter[368278]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.593 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.593 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.617 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.729 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.730 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.744 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.744 351492 INFO nova.compute.claims [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 01:54:31 compute-0 podman[413957]: 2025-12-03 01:54:31.864142411 +0000 UTC m=+0.102040805 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:54:31 compute-0 podman[413955]: 2025-12-03 01:54:31.870324995 +0000 UTC m=+0.128530321 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:54:31 compute-0 nova_compute[351485]: 2025-12-03 01:54:31.875 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:31 compute-0 podman[413956]: 2025-12-03 01:54:31.887809498 +0000 UTC m=+0.132552615 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true)
Dec  3 01:54:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:54:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130994907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.384 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.401 351492 DEBUG nova.compute.provider_tree [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.441 351492 DEBUG nova.scheduler.client.report [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.468 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.469 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.513 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.514 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.541 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.587 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.723 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.726 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.726 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating image(s)#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.765 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.822 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.892 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.902 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:32 compute-0 nova_compute[351485]: 2025-12-03 01:54:32.904 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.156 351492 DEBUG nova.virt.libvirt.imagebackend [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.965 351492 WARNING oslo_policy.policy [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  3 01:54:33 compute-0 nova_compute[351485]: 2025-12-03 01:54:33.966 351492 WARNING oslo_policy.policy [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  3 01:54:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.354 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.458 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.460 351492 DEBUG nova.virt.images [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] 466cf0db-c3be-4d70-b9f3-08c056c2cad9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.463 351492 DEBUG nova.privsep.utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.464 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.756 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.part /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.765 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.860 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8.converted --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.863 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.922 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:35 compute-0 nova_compute[351485]: 2025-12-03 01:54:35.933 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  3 01:54:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  3 01:54:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  3 01:54:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 1 op/s
Dec  3 01:54:36 compute-0 nova_compute[351485]: 2025-12-03 01:54:36.337 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Successfully created port: d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  3 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  3 01:54:37 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.564 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.631 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Successfully updated port: d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.717 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.717 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.718 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.749 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 01:54:37 compute-0 podman[414175]: 2025-12-03 01:54:37.882982645 +0000 UTC m=+0.130109827 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 01:54:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:37 compute-0 nova_compute[351485]: 2025-12-03 01:54:37.976 351492 DEBUG nova.objects.instance [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.052 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.110 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.118 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.119 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.120 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.150 351492 DEBUG nova.compute.manager [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.151 351492 DEBUG nova.compute.manager [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing instance network info cache due to event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.161 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.166 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.166 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 966 KiB/s rd, 2 op/s
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.217 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.219 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.261 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.269 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:54:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:54:38 compute-0 nova_compute[351485]: 2025-12-03 01:54:38.966 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.257 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.989s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.527 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.528 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Ensure instance console log exists: /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.530 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.530 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:39 compute-0 nova_compute[351485]: 2025-12-03 01:54:39.531 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 35 MiB data, 172 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 985 KiB/s wr, 40 op/s
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.696 351492 DEBUG nova.network.neutron [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.727 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.728 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance network_info: |[{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.728 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.729 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.735 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start _get_guest_xml network_info=[{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.749 351492 WARNING nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.766 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.767 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.778 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.779 351492 DEBUG nova.virt.libvirt.host [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.780 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.781 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.782 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.783 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.784 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.785 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.785 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.786 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.787 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.788 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.788 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.789 351492 DEBUG nova.virt.hardware [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.797 351492 DEBUG nova.privsep.utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 01:54:40 compute-0 nova_compute[351485]: 2025-12-03 01:54:40.799 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:40 compute-0 podman[414364]: 2025-12-03 01:54:40.898342976 +0000 UTC m=+0.150505146 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec  3 01:54:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:54:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606851180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.286 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.287 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:54:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623815996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.745 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.793 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:41 compute-0 nova_compute[351485]: 2025-12-03 01:54:41.809 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec  3 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:54:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938135160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.321 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.322 351492 DEBUG nova.virt.libvirt.vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:54:32Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.323 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.323 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.325 351492 DEBUG nova.objects.instance [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.345 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] End _get_guest_xml xml=<domain type="kvm">
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <uuid>9182286b-5a08-4961-b4bb-c0e2f05746f7</uuid>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <name>instance-00000001</name>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <memory>524288</memory>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <metadata>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:name>test_0</nova:name>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 01:54:40</nova:creationTime>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:flavor name="m1.small">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:memory>512</nova:memory>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <nova:port uuid="d2a50b9b-c23e-4e96-a247-ba01de01a3f1">
Dec  3 01:54:42 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="192.168.0.5" ipVersion="4"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </metadata>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <system>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="serial">9182286b-5a08-4961-b4bb-c0e2f05746f7</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="uuid">9182286b-5a08-4961-b4bb-c0e2f05746f7</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </system>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <os>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </os>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <features>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <apic/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </features>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </clock>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </source>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.eph0">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </source>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <target dev="vdb" bus="virtio"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </source>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:54:42 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:8f:a6:32"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <target dev="tapd2a50b9b-c2"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/console.log" append="off"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </serial>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <video>
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </video>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 01:54:42 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 01:54:42 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 01:54:42 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:54:42 compute-0 nova_compute[351485]: </domain>
Dec  3 01:54:42 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Preparing to wait for external event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.346 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.347 351492 DEBUG nova.virt.libvirt.vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:54:32Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.347 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.348 351492 DEBUG nova.network.os_vif_util [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.348 351492 DEBUG os_vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.387 351492 DEBUG ovsdbapp.backend.ovs_idl [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.388 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.389 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.389 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.415 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.416 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.416 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.418 351492 INFO oslo.privsep.daemon [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpftcxxza1/privsep.sock']#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.729 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  3 01:54:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  3 01:54:42 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.993 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated VIF entry in instance network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 01:54:42 compute-0 nova_compute[351485]: 2025-12-03 01:54:42.998 351492 DEBUG nova.network.neutron [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.017 351492 DEBUG oslo_concurrency.lockutils [req-c58eed6d-18c9-472c-9087-58f160e834bb req-6604b7a8-03a5-40ed-b279-cdde1ad18b26 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.193 351492 INFO oslo.privsep.daemon [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.056 414469 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.064 414469 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.070 414469 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.070 414469 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414469#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.581 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.581 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2a50b9b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.582 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2a50b9b-c2, col_values=(('external_ids', {'iface-id': 'd2a50b9b-c23e-4e96-a247-ba01de01a3f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:a6:32', 'vm-uuid': '9182286b-5a08-4961-b4bb-c0e2f05746f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:43 compute-0 NetworkManager[48912]: <info>  [1764726883.5852] manager: (tapd2a50b9b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.586 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.596 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.598 351492 INFO os_vif [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2')#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.686 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.686 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.687 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.687 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:8f:a6:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.688 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Using config drive#033[00m
Dec  3 01:54:43 compute-0 nova_compute[351485]: 2025-12-03 01:54:43.770 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Dec  3 01:54:44 compute-0 podman[414495]: 2025-12-03 01:54:44.822731966 +0000 UTC m=+0.098815894 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 01:54:44 compute-0 podman[414494]: 2025-12-03 01:54:44.833569965 +0000 UTC m=+0.120074340 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64)
Dec  3 01:54:44 compute-0 podman[414493]: 2025-12-03 01:54:44.864003011 +0000 UTC m=+0.158177004 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.148 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Creating config drive at /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.161 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5bt4xdf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.300 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5bt4xdf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.362 351492 DEBUG nova.storage.rbd_utils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.372 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.660 351492 DEBUG oslo_concurrency.processutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config 9182286b-5a08-4961-b4bb-c0e2f05746f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.661 351492 INFO nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deleting local config drive /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.config because it was imported into RBD.#033[00m
Dec  3 01:54:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 01:54:46 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 01:54:46 compute-0 podman[414599]: 2025-12-03 01:54:46.836102763 +0000 UTC m=+0.121584783 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:54:46 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  3 01:54:46 compute-0 kernel: tapd2a50b9b-c2: entered promiscuous mode
Dec  3 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.8705] manager: (tapd2a50b9b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec  3 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00027|binding|INFO|Claiming lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for this chassis.
Dec  3 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00028|binding|INFO|d2a50b9b-c23e-4e96-a247-ba01de01a3f1: Claiming fa:16:3e:8f:a6:32 192.168.0.5
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.879 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.893 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a6:32 192.168.0.5'], port_security=['fa:16:3e:8f:a6:32 192.168.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.5/24', 'neutron:device_id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d2a50b9b-c23e-4e96-a247-ba01de01a3f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.894 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis#033[00m
Dec  3 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.896 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 01:54:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:46.898 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp2pmbp4iw/privsep.sock']#033[00m
Dec  3 01:54:46 compute-0 systemd-udevd[414657]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.9344] device (tapd2a50b9b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 01:54:46 compute-0 NetworkManager[48912]: <info>  [1764726886.9354] device (tapd2a50b9b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 01:54:46 compute-0 systemd-machined[138558]: New machine qemu-1-instance-00000001.
Dec  3 01:54:46 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00029|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 ovn-installed in OVS
Dec  3 01:54:46 compute-0 ovn_controller[89134]: 2025-12-03T01:54:46Z|00030|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 up in Southbound
Dec  3 01:54:46 compute-0 nova_compute[351485]: 2025-12-03 01:54:46.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:54:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:54:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2420847559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.598 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.5981455, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.599 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Started (Lifecycle Event)#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.647 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.649 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2pmbp4iw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.515 414755 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.523 414755 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.527 414755 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.528 414755 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414755#033[00m
Dec  3 01:54:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:47.653 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9095c4c0-ecef-4c8d-ab53-aee7eae29338]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.687 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.693 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.5982256, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.693 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Paused (Lifecycle Event)#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.720 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.727 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.748 351492 DEBUG nova.compute.manager [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.748 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG oslo_concurrency.lockutils [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.749 351492 DEBUG nova.compute.manager [req-6cbb4c8d-55bf-472b-8591-d52521905002 req-a1f05d09-f816-405c-a222-5d37f158cae2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Processing event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.750 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.752 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.758 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726887.7577085, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.758 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Resumed (Lifecycle Event)#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.762 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.775 351492 INFO nova.virt.libvirt.driver [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance spawned successfully.#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.775 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.781 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.794 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.805 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.806 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.807 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.808 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.808 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.809 351492 DEBUG nova.virt.libvirt.driver [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.816 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.867 351492 INFO nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 15.14 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.868 351492 DEBUG nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.936 351492 INFO nova.compute.manager [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 16.24 seconds to build instance.#033[00m
Dec  3 01:54:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:47 compute-0 nova_compute[351485]: 2025-12-03 01:54:47.954 351492 DEBUG oslo_concurrency.lockutils [None req-f74f175e-4ae4-45ad-8e71-3eac9b810f4f 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.206 414755 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 854 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Dec  3 01:54:48 compute-0 nova_compute[351485]: 2025-12-03 01:54:48.585 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.857 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f8184ea0-e0dd-4510-bb82-d843f0a535a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.859 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ba11691-21 in ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.861 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ba11691-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.861 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ae1e41-795c-4b25-8e64-ab17ebd3d79e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.865 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6d808e-3491-45c8-a123-d7995c480be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.895 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[900b18ce-eed7-4a13-8b71-3e96d272d4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.937 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[787facbe-6165-4480-bbba-2f4ed0ba4f03]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:48.940 288528 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp9j9wa7zp/privsep.sock']#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.696 288528 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.698 288528 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9j9wa7zp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.554 414771 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.558 414771 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.560 414771 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.561 414771 INFO oslo.privsep.daemon [-] privsep daemon running as pid 414771#033[00m
Dec  3 01:54:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:49.702 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[68b80d36-222f-4d8b-af57-fbd9e7dde86a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.827 351492 DEBUG nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.828 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.828 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.829 351492 DEBUG oslo_concurrency.lockutils [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.830 351492 DEBUG nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 01:54:49 compute-0 nova_compute[351485]: 2025-12-03 01:54:49.831 351492 WARNING nova.compute.manager [req-5e86aa50-39e7-4cca-94cc-ff42ad31dfc2 req-ea646ff0-5dc2-4003-b650-fb75e0aa7c30 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received unexpected event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with vm_state active and task_state None.#033[00m
Dec  3 01:54:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 881 KiB/s wr, 33 op/s
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.254 414771 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.858 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[aeffe574-299a-4e8f-a9f8-adf1274fcf4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:50 compute-0 NetworkManager[48912]: <info>  [1764726890.9057] manager: (tap7ba11691-20): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.904 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6455f5b-0a3d-4efc-9f2e-8d3c6efc352c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:50 compute-0 systemd-udevd[414783]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.966 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[aa297b0a-988e-432e-817d-65bc49062425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:50.970 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4401a42a-99f8-4d92-b077-c72af5d878f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 NetworkManager[48912]: <info>  [1764726891.0047] device (tap7ba11691-20): carrier: link connected
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.013 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e1198794-480a-4cbe-8c33-e64fafc74bba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.039 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb42e5a-f75a-425d-959f-ecd51b1aee8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414801, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.061 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5924698f-9448-4987-a67b-4f9c7aa43cd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:a4dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573048, 'tstamp': 573048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414802, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.083 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f383b8-350f-4d6b-9af7-afc5eab7e902]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 19031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414803, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.141 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc51e9-b3d3-492a-9ee7-64d9125f428e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.232 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc4ea4-7142-443f-b0b7-1ee844b552b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.234 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.234 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.235 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:51 compute-0 NetworkManager[48912]: <info>  [1764726891.2394] manager: (tap7ba11691-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec  3 01:54:51 compute-0 kernel: tap7ba11691-20: entered promiscuous mode
Dec  3 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:51 compute-0 ovn_controller[89134]: 2025-12-03T01:54:51Z|00031|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec  3 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.252 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.253 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.254 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3c79b873-9341-4ce4-9a9c-be3e8f59ef42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.256 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: global
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/7ba11691-2711-476c-9191-cb6dfd0efa7d.pid.haproxy
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID 7ba11691-2711-476c-9191-cb6dfd0efa7d
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  3 01:54:51 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:51.257 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'env', 'PROCESS_TAG=haproxy-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ba11691-2711-476c-9191-cb6dfd0efa7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  3 01:54:51 compute-0 nova_compute[351485]: 2025-12-03 01:54:51.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:54:51 compute-0 podman[414836]: 2025-12-03 01:54:51.838839412 +0000 UTC m=+0.119660498 container create 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 01:54:51 compute-0 podman[414836]: 2025-12-03 01:54:51.781798858 +0000 UTC m=+0.062620004 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 01:54:51 compute-0 systemd[1]: Started libpod-conmon-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope.
Dec  3 01:54:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2b42ad51a1eafabc174b54703a8a7fc40735ce50000101ab3bd4077ab4d5c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:52 compute-0 podman[414836]: 2025-12-03 01:54:52.011047214 +0000 UTC m=+0.291868320 container init 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:54:52 compute-0 podman[414836]: 2025-12-03 01:54:52.019978738 +0000 UTC m=+0.300799804 container start 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 01:54:52 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : New worker (414906) forked
Dec  3 01:54:52 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : Loading success.
Dec  3 01:54:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 57 op/s
Dec  3 01:54:52 compute-0 nova_compute[351485]: 2025-12-03 01:54:52.733 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:54:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:53 compute-0 podman[415037]: 2025-12-03 01:54:53.197274493 +0000 UTC m=+0.117134435 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:54:53 compute-0 podman[415037]: 2025-12-03 01:54:53.327670155 +0000 UTC m=+0.247530067 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:54:53 compute-0 nova_compute[351485]: 2025-12-03 01:54:53.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:54:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec  3 01:54:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:54:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:54:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 01:54:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.914 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.915 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.916 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  3 01:54:55 compute-0 nova_compute[351485]: 2025-12-03 01:54:55.916 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 164b08fd-e4a9-433f-b7b5-13a5862c0112 does not exist
Dec  3 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 779bcd1f-91c5-404d-87c2-149b5e0097de does not exist
Dec  3 01:54:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 83a6554e-738b-4f41-bf99-6d3f208ad510 does not exist
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:54:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:54:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:54:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec  3 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:54:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.04075719 +0000 UTC m=+0.104441214 container create 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.004068316 +0000 UTC m=+0.067752400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:54:57 compute-0 systemd[1]: Started libpod-conmon-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope.
Dec  3 01:54:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.196891345 +0000 UTC m=+0.260575359 container init 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.209178365 +0000 UTC m=+0.272862359 container start 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.213972982 +0000 UTC m=+0.277656976 container attach 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:54:57 compute-0 interesting_bell[415478]: 167 167
Dec  3 01:54:57 compute-0 systemd[1]: libpod-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope: Deactivated successfully.
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.223906004 +0000 UTC m=+0.287590018 container died 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 01:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f040f5ea34c2cbe8772172a31c6e75ffb0f30d1c51ea565c0abc777b60e8aa-merged.mount: Deactivated successfully.
Dec  3 01:54:57 compute-0 podman[415461]: 2025-12-03 01:54:57.526260672 +0000 UTC m=+0.589944696 container remove 42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:54:57 compute-0 systemd[1]: libpod-conmon-42928d54fcbe99c4a71b6047a5acf53c5d23652cff221811adb44a268f6bba8a.scope: Deactivated successfully.
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.738 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.751 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.776 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.778 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.780 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.782 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.836 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 01:54:57 compute-0 nova_compute[351485]: 2025-12-03 01:54:57.837 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.850607736 +0000 UTC m=+0.098628109 container create 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.821694542 +0000 UTC m=+0.069715005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:54:57 compute-0 systemd[1]: Started libpod-conmon-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope.
Dec  3 01:54:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:54:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:54:57 compute-0 podman[415500]: 2025-12-03 01:54:57.999516455 +0000 UTC m=+0.247536858 container init 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:54:58 compute-0 podman[415500]: 2025-12-03 01:54:58.02358628 +0000 UTC m=+0.271606683 container start 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 01:54:58 compute-0 podman[415500]: 2025-12-03 01:54:58.030340972 +0000 UTC m=+0.278361365 container attach 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec  3 01:54:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:54:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239002983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.340 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:54:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.445 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.446 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.447 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.950 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.953 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4068MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.953 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:58 compute-0 nova_compute[351485]: 2025-12-03 01:54:58.954 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:58 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:58.991 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:54:58 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:58.993 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.002 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.073 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.075 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.075 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.127 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:54:59 compute-0 busy_moser[415516]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:54:59 compute-0 busy_moser[415516]: --> relative data size: 1.0
Dec  3 01:54:59 compute-0 busy_moser[415516]: --> All data devices are unavailable
Dec  3 01:54:59 compute-0 systemd[1]: libpod-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Deactivated successfully.
Dec  3 01:54:59 compute-0 systemd[1]: libpod-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Consumed 1.089s CPU time.
Dec  3 01:54:59 compute-0 podman[415500]: 2025-12-03 01:54:59.198019644 +0000 UTC m=+1.446040047 container died 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 01:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f141437fb3e5f4a426c458dfb90f0f0917cc77a693fa30eceb8f500251203fd-merged.mount: Deactivated successfully.
Dec  3 01:54:59 compute-0 podman[415500]: 2025-12-03 01:54:59.297919268 +0000 UTC m=+1.545939631 container remove 1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:54:59 compute-0 systemd[1]: libpod-conmon-1dbb14fe812e60e5e5cba84f0b21f4586f2c986d2078cf472092a658f77d6aad.scope: Deactivated successfully.
Dec  3 01:54:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:54:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294191349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.620 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:54:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:54:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.638 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.655 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.726 351492 ERROR nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [req-41a9209d-63cd-4d67-a0fe-51df9a7d13db] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 107397d2-51bc-4a03-bce4-7cd69319cf05.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-41a9209d-63cd-4d67-a0fe-51df9a7d13db"}]}#033[00m
Dec  3 01:54:59 compute-0 podman[158098]: time="2025-12-03T01:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.750 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 01:54:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.776 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.777 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.797 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 01:54:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8621 "" "Go-http-client/1.1"
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.830 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 01:54:59 compute-0 nova_compute[351485]: 2025-12-03 01:54:59.872 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Dec  3 01:55:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:55:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857368022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.399 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.409 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.455 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updated inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.480 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:55:00 compute-0 nova_compute[351485]: 2025-12-03 01:55:00.481 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.527555783 +0000 UTC m=+0.087272675 container create 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.491574919 +0000 UTC m=+0.051291861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:55:00 compute-0 systemd[1]: Started libpod-conmon-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope.
Dec  3 01:55:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.65250171 +0000 UTC m=+0.212218692 container init 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.670130932 +0000 UTC m=+0.229847844 container start 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:55:00 compute-0 systemd[1]: libpod-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope: Deactivated successfully.
Dec  3 01:55:00 compute-0 hungry_einstein[415774]: 167 167
Dec  3 01:55:00 compute-0 conmon[415774]: conmon 112bb404c19eadfad88b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope/container/memory.events
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.683724019 +0000 UTC m=+0.243441001 container attach 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.684909503 +0000 UTC m=+0.244626405 container died 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ad719a465cd63d045b6a6340cb76dcec5585153630d9c410f5ecb056492da3-merged.mount: Deactivated successfully.
Dec  3 01:55:00 compute-0 podman[415758]: 2025-12-03 01:55:00.740720502 +0000 UTC m=+0.300437394 container remove 112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:55:00 compute-0 systemd[1]: libpod-conmon-112bb404c19eadfad88b29a8666c1f2513a281b84a9173f9fbdd59b21d234335.scope: Deactivated successfully.
Dec  3 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.008634809 +0000 UTC m=+0.097421045 container create f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:00.973269052 +0000 UTC m=+0.062055378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:55:01 compute-0 systemd[1]: Started libpod-conmon-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope.
Dec  3 01:55:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.194667405 +0000 UTC m=+0.283453721 container init f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.235711093 +0000 UTC m=+0.324497319 container start f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:55:01 compute-0 podman[415797]: 2025-12-03 01:55:01.240720236 +0000 UTC m=+0.329506472 container attach f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.275 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:01 compute-0 nova_compute[351485]: 2025-12-03 01:55:01.278 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:55:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:55:02 compute-0 distracted_knuth[415812]: {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    "0": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "devices": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "/dev/loop3"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            ],
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_name": "ceph_lv0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_size": "21470642176",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "name": "ceph_lv0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "tags": {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_name": "ceph",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.crush_device_class": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.encrypted": "0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_id": "0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.vdo": "0"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            },
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "vg_name": "ceph_vg0"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        }
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    ],
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    "1": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "devices": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "/dev/loop4"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            ],
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_name": "ceph_lv1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_size": "21470642176",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "name": "ceph_lv1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "tags": {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_name": "ceph",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.crush_device_class": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.encrypted": "0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_id": "1",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.vdo": "0"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            },
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "vg_name": "ceph_vg1"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        }
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    ],
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    "2": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "devices": [
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "/dev/loop5"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            ],
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_name": "ceph_lv2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_size": "21470642176",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "name": "ceph_lv2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "tags": {
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.cluster_name": "ceph",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.crush_device_class": "",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.encrypted": "0",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osd_id": "2",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:                "ceph.vdo": "0"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            },
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "type": "block",
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:            "vg_name": "ceph_vg2"
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:        }
Dec  3 01:55:02 compute-0 distracted_knuth[415812]:    ]
Dec  3 01:55:02 compute-0 distracted_knuth[415812]: }
Dec  3 01:55:02 compute-0 systemd[1]: libpod-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope: Deactivated successfully.
Dec  3 01:55:02 compute-0 podman[415797]: 2025-12-03 01:55:02.20297101 +0000 UTC m=+1.291757296 container died f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:55:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec  3 01:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-56f3018d192268e15ca9bd1bfb0be8f81057b2cbf4cf8f626a7fbd70518207e2-merged.mount: Deactivated successfully.
Dec  3 01:55:02 compute-0 podman[415797]: 2025-12-03 01:55:02.397149178 +0000 UTC m=+1.485935424 container remove f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:55:02 compute-0 podman[415834]: 2025-12-03 01:55:02.403785477 +0000 UTC m=+0.148731036 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 01:55:02 compute-0 systemd[1]: libpod-conmon-f16313d262f155cfb3c4ca9b00b05c251e6163a3ebe1814da6ae91183046c6b4.scope: Deactivated successfully.
Dec  3 01:55:02 compute-0 podman[415835]: 2025-12-03 01:55:02.410400515 +0000 UTC m=+0.144296719 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:55:02 compute-0 podman[415826]: 2025-12-03 01:55:02.482126347 +0000 UTC m=+0.230605456 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  3 01:55:02 compute-0 nova_compute[351485]: 2025-12-03 01:55:02.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:02 compute-0 nova_compute[351485]: 2025-12-03 01:55:02.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.610386956 +0000 UTC m=+0.102485948 container create 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.616 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:03 compute-0 ovn_controller[89134]: 2025-12-03T01:55:03Z|00032|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6400] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6410] device (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6468] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6513] device (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.56910515 +0000 UTC m=+0.061204182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6590] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6624] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6653] device (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  3 01:55:03 compute-0 NetworkManager[48912]: <info>  [1764726903.6681] device (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  3 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:03 compute-0 ovn_controller[89134]: 2025-12-03T01:55:03Z|00033|binding|INFO|Releasing lport 8c8945aa-32be-4ced-a7fe-2b9502f30008 from this chassis (sb_readonly=0)
Dec  3 01:55:03 compute-0 systemd[1]: Started libpod-conmon-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope.
Dec  3 01:55:03 compute-0 nova_compute[351485]: 2025-12-03 01:55:03.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.771910684 +0000 UTC m=+0.264009696 container init 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.790853513 +0000 UTC m=+0.282952515 container start 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.797163173 +0000 UTC m=+0.289262155 container attach 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  3 01:55:03 compute-0 frosty_grothendieck[416050]: 167 167
Dec  3 01:55:03 compute-0 systemd[1]: libpod-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope: Deactivated successfully.
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.804612015 +0000 UTC m=+0.296711027 container died 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 01:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3378d32cb104e2c03402f697ec5bf9803e37ed33db991c0951f99fb72692d531-merged.mount: Deactivated successfully.
Dec  3 01:55:03 compute-0 podman[416033]: 2025-12-03 01:55:03.871696665 +0000 UTC m=+0.363795657 container remove 739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:55:03 compute-0 systemd[1]: libpod-conmon-739246c4a34f5eaf8f64a1485c3ae7a29bd845ab17a71805c9816366638edbea.scope: Deactivated successfully.
Dec  3 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.194890355 +0000 UTC m=+0.117214007 container create e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.212 351492 DEBUG nova.compute.manager [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.213 351492 DEBUG nova.compute.manager [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing instance network info cache due to event network-changed-d2a50b9b-c23e-4e96-a247-ba01de01a3f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.214 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.215 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:55:04 compute-0 nova_compute[351485]: 2025-12-03 01:55:04.216 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Refreshing network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 01:55:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 18 op/s
Dec  3 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.160231289 +0000 UTC m=+0.082555001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:55:04 compute-0 systemd[1]: Started libpod-conmon-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope.
Dec  3 01:55:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.40300567 +0000 UTC m=+0.325329352 container init e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.42900244 +0000 UTC m=+0.351326102 container start e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:55:04 compute-0 podman[416074]: 2025-12-03 01:55:04.440975411 +0000 UTC m=+0.363299083 container attach e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.561 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated VIF entry in instance network info cache for port d2a50b9b-c23e-4e96-a247-ba01de01a3f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.563 351492 DEBUG nova.network.neutron [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:55:05 compute-0 cool_blackburn[416089]: {
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_id": 2,
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "type": "bluestore"
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    },
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_id": 1,
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "type": "bluestore"
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    },
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_id": 0,
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:        "type": "bluestore"
Dec  3 01:55:05 compute-0 cool_blackburn[416089]:    }
Dec  3 01:55:05 compute-0 cool_blackburn[416089]: }
Dec  3 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:55:05 compute-0 nova_compute[351485]: 2025-12-03 01:55:05.584 351492 DEBUG oslo_concurrency.lockutils [req-e3358f44-bbad-407e-bcec-42a742727d38 req-69199a08-f096-49a8-a3a8-8032bb048934 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:55:05 compute-0 systemd[1]: libpod-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Deactivated successfully.
Dec  3 01:55:05 compute-0 podman[416074]: 2025-12-03 01:55:05.622084185 +0000 UTC m=+1.544407857 container died e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:55:05 compute-0 systemd[1]: libpod-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Consumed 1.188s CPU time.
Dec  3 01:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8da6633541e7760393ad0d9903b63cfba90e77faa0274e3383f93a982b1bbf-merged.mount: Deactivated successfully.
Dec  3 01:55:05 compute-0 podman[416074]: 2025-12-03 01:55:05.728315919 +0000 UTC m=+1.650639541 container remove e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 01:55:05 compute-0 systemd[1]: libpod-conmon-e899f1cb505a92ccc9c24876e05b8092a81929aa5992103e3314f6647c79be90.scope: Deactivated successfully.
Dec  3 01:55:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:55:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:55:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:55:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:55:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f2561ab9-76ef-41cc-a767-7a30e9957b55 does not exist
Dec  3 01:55:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 460d5439-d1a0-44c2-a16e-1753c0db1142 does not exist
Dec  3 01:55:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 op/s
Dec  3 01:55:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:55:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:55:07 compute-0 nova_compute[351485]: 2025-12-03 01:55:07.744 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:07.996 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:55:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:08 compute-0 nova_compute[351485]: 2025-12-03 01:55:08.611 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:08 compute-0 podman[416183]: 2025-12-03 01:55:08.909468771 +0000 UTC m=+0.152980826 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:55:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:11 compute-0 podman[416203]: 2025-12-03 01:55:11.921457457 +0000 UTC m=+0.176744063 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:55:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:12 compute-0 nova_compute[351485]: 2025-12-03 01:55:12.746 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:13 compute-0 nova_compute[351485]: 2025-12-03 01:55:13.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:15 compute-0 podman[416224]: 2025-12-03 01:55:15.871715473 +0000 UTC m=+0.101274735 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:55:15 compute-0 podman[416223]: 2025-12-03 01:55:15.879461484 +0000 UTC m=+0.117740773 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 01:55:15 compute-0 podman[416222]: 2025-12-03 01:55:15.908016127 +0000 UTC m=+0.154725216 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 01:55:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:17 compute-0 nova_compute[351485]: 2025-12-03 01:55:17.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:17 compute-0 podman[416287]: 2025-12-03 01:55:17.868009114 +0000 UTC m=+0.115822557 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.915512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917915634, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1328, "num_deletes": 251, "total_data_size": 1962161, "memory_usage": 1985808, "flush_reason": "Manual Compaction"}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917930327, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1920085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24202, "largest_seqno": 25529, "table_properties": {"data_size": 1913769, "index_size": 3519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13422, "raw_average_key_size": 20, "raw_value_size": 1900981, "raw_average_value_size": 2841, "num_data_blocks": 158, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726798, "oldest_key_time": 1764726798, "file_creation_time": 1764726917, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14910 microseconds, and 6887 cpu microseconds.
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.930419) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1920085 bytes OK
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.930440) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.933976) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.933991) EVENT_LOG_v1 {"time_micros": 1764726917933987, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.934009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1956178, prev total WAL file size 1956178, number of live WAL files 2.
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.935176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1875KB)], [56(7162KB)]
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917935249, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9254376, "oldest_snapshot_seqno": -1}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4615 keys, 7483625 bytes, temperature: kUnknown
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917991934, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7483625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7452511, "index_size": 18460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115610, "raw_average_key_size": 25, "raw_value_size": 7368578, "raw_average_value_size": 1596, "num_data_blocks": 765, "num_entries": 4615, "num_filter_entries": 4615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764726917, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.992172) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7483625 bytes
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.994323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 5133, records dropped: 518 output_compression: NoCompression
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.994343) EVENT_LOG_v1 {"time_micros": 1764726917994334, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56759, "compaction_time_cpu_micros": 26373, "output_level": 6, "num_output_files": 1, "total_output_size": 7483625, "num_input_records": 5133, "num_output_records": 4615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917994925, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764726917996552, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.935052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:17 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:55:17.996732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:55:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:18 compute-0 nova_compute[351485]: 2025-12-03 01:55:18.618 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.503 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.514 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67ddc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:55:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:19.858 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 01:55:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1848 Content-Type: application/json Date: Wed, 03 Dec 2025 01:55:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 x-openstack-request-id: req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9182286b-5a08-4961-b4bb-c0e2f05746f7", "name": "test_0", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T01:54:29Z", "updated": "2025-12-03T01:54:47Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:a6:32"}, {"version": 4, "addr": "192.168.122.241", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:a6:32"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T01:54:47.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.510 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9182286b-5a08-4961-b4bb-c0e2f05746f7 used request id req-6bc6a690-07c9-4ed7-b8a7-bf9b31bd76e4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.513 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:55:20.512958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.556 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 33.30859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.558 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.559 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:55:20.558585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.564 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9182286b-5a08-4961-b4bb-c0e2f05746f7 / tapd2a50b9b-c2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.564 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:55:20.566476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.567 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:55:20.568448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:55:20.570008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.571 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:55:20.571650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:55:20.573156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.600 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.601 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.601 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T01:55:20.603734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.604 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.606 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.607 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:55:20.607414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.700 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 19901952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.701 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 1077248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.702 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 55470 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:55:20.704997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1553742100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 104971917 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 23637421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:55:20.706986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.712 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.712 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 50 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:55:20.711135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.715 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:55:20.716568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.718 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:55:20.718140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 17526784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:55:20.720254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.720 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 5054541085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:55:20.722070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.722 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.724 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:55:20.723896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.725 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.726 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:55:20.730291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 30980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.735 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:55:20.731485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:55:20.732506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:55:20.733476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.736 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:55:20.734770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:55:20.736672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:55:20.737925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.737 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:55:20.740125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.744 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.744 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T01:55:20.743725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:55:20.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:55:21 compute-0 ovn_controller[89134]: 2025-12-03T01:55:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:a6:32 192.168.0.5
Dec  3 01:55:21 compute-0 ovn_controller[89134]: 2025-12-03T01:55:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:a6:32 192.168.0.5
Dec  3 01:55:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 54 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 545 KiB/s wr, 15 op/s
Dec  3 01:55:22 compute-0 nova_compute[351485]: 2025-12-03 01:55:22.753 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:23 compute-0 nova_compute[351485]: 2025-12-03 01:55:23.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 61 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 945 KiB/s wr, 25 op/s
Dec  3 01:55:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 01:55:27 compute-0 nova_compute[351485]: 2025-12-03 01:55:27.756 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:55:28
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', '.mgr']
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:28 compute-0 nova_compute[351485]: 2025-12-03 01:55:28.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:55:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:55:29 compute-0 podman[158098]: time="2025-12-03T01:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:55:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:55:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8614 "" "Go-http-client/1.1"
Dec  3 01:55:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:55:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:55:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  3 01:55:32 compute-0 nova_compute[351485]: 2025-12-03 01:55:32.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:32 compute-0 podman[416315]: 2025-12-03 01:55:32.869640408 +0000 UTC m=+0.114207393 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 01:55:32 compute-0 podman[416317]: 2025-12-03 01:55:32.907284289 +0000 UTC m=+0.136042734 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 01:55:32 compute-0 podman[416316]: 2025-12-03 01:55:32.912303932 +0000 UTC m=+0.146710687 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 01:55:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:33 compute-0 ovn_controller[89134]: 2025-12-03T01:55:33Z|00034|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  3 01:55:33 compute-0 nova_compute[351485]: 2025-12-03 01:55:33.630 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 975 KiB/s wr, 41 op/s
Dec  3 01:55:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 575 KiB/s wr, 31 op/s
Dec  3 01:55:37 compute-0 nova_compute[351485]: 2025-12-03 01:55:37.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005514586182197044 of space, bias 1.0, pg target 0.1654375854659113 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:55:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:55:38 compute-0 nova_compute[351485]: 2025-12-03 01:55:38.634 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:39 compute-0 podman[416372]: 2025-12-03 01:55:39.891842047 +0000 UTC m=+0.136052324 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  3 01:55:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 01:55:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:42 compute-0 nova_compute[351485]: 2025-12-03 01:55:42.766 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:42 compute-0 podman[416392]: 2025-12-03 01:55:42.886142518 +0000 UTC m=+0.128499559 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 01:55:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:43 compute-0 nova_compute[351485]: 2025-12-03 01:55:43.638 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:46 compute-0 podman[416414]: 2025-12-03 01:55:46.894217096 +0000 UTC m=+0.137057563 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Dec  3 01:55:46 compute-0 podman[416415]: 2025-12-03 01:55:46.897221622 +0000 UTC m=+0.137518546 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:55:46 compute-0 podman[416413]: 2025-12-03 01:55:46.958805345 +0000 UTC m=+0.206318825 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:55:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:55:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894116698' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:55:47 compute-0 nova_compute[351485]: 2025-12-03 01:55:47.771 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:48 compute-0 podman[416478]: 2025-12-03 01:55:48.465207349 +0000 UTC m=+0.160932283 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:55:48 compute-0 nova_compute[351485]: 2025-12-03 01:55:48.642 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.685 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.686 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 01:55:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:49.688 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:55:49 compute-0 nova_compute[351485]: 2025-12-03 01:55:49.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:55:52 compute-0 nova_compute[351485]: 2025-12-03 01:55:52.775 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:53 compute-0 nova_compute[351485]: 2025-12-03 01:55:53.647 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec  3 01:55:55 compute-0 nova_compute[351485]: 2025-12-03 01:55:55.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:55 compute-0 nova_compute[351485]: 2025-12-03 01:55:55.611 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.527 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.528 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.552 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.708 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.709 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.725 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.725 351492 INFO nova.compute.claims [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 01:55:56 compute-0 nova_compute[351485]: 2025-12-03 01:55:56.927 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:56 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:55:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115611715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.166 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.266 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.267 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:55:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773167511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.505 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.522 351492 DEBUG nova.compute.provider_tree [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.543 351492 DEBUG nova.scheduler.client.report [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.575 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.577 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.643 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.644 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.674 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.720 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.830 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.832 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.833 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating image(s)#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.882 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:57 compute-0 nova_compute[351485]: 2025-12-03 01:55:57.938 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.001 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.009 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.146 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.146 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.147 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.148 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.195 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.205 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 52862152-12c7-4236-89c3-67750ecbed7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.324 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.325 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4017MB free_disk=59.9552001953125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.326 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.326 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.406 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:55:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.484 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.680 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 52862152-12c7-4236-89c3-67750ecbed7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:58 compute-0 nova_compute[351485]: 2025-12-03 01:55:58.829 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 01:55:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:55:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797809507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.069 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.087 351492 DEBUG nova.objects.instance [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.097 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.152 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.217 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.230 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.264 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.292 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.293 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.326 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.327 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.328 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.329 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.389 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:55:59 compute-0 nova_compute[351485]: 2025-12-03 01:55:59.402 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:55:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:55:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:55:59 compute-0 podman[158098]: time="2025-12-03T01:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:55:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:55:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8617 "" "Go-http-client/1.1"
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.089 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 105 MiB data, 205 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.328 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.329 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.329 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.350 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.352 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Ensure instance console log exists: /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.352 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.353 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.354 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.359 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.429 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Successfully updated port: 521d2181-8f17-4f4d-a3a6-98de1e17b734 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.450 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.450 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.451 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.535 351492 DEBUG nova.compute.manager [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.536 351492 DEBUG nova.compute.manager [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing instance network info cache due to event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.536 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.572 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.573 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:56:00 compute-0 nova_compute[351485]: 2025-12-03 01:56:00.671 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:56:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.220 351492 DEBUG nova.network.neutron [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.245 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.246 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance network_info: |[{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.246 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.247 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.252 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start _get_guest_xml network_info=[{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 01:56:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 110 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.267 351492 WARNING nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.282 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.283 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.290 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.290 351492 DEBUG nova.virt.libvirt.host [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.291 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.291 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.292 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.292 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.293 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.294 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.295 351492 DEBUG nova.virt.hardware [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.298 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.326 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.344 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.345 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.347 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:56:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955108270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.814 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:02 compute-0 nova_compute[351485]: 2025-12-03 01:56:02.816 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:56:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112121715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.340 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.392 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.401 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.655 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:03 compute-0 podman[416953]: 2025-12-03 01:56:03.844141222 +0000 UTC m=+0.090936890 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 01:56:03 compute-0 podman[416955]: 2025-12-03 01:56:03.858660355 +0000 UTC m=+0.090939530 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:56:03 compute-0 podman[416954]: 2025-12-03 01:56:03.867408564 +0000 UTC m=+0.121224072 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 01:56:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 01:56:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149362390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.923 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.924 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:55:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 01:56:03 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.925 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.926 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.928 351492 DEBUG nova.objects.instance [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.949 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] End _get_guest_xml xml=<domain type="kvm">
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <uuid>52862152-12c7-4236-89c3-67750ecbed7a</uuid>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <name>instance-00000002</name>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <memory>524288</memory>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <metadata>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:name>vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l</nova:name>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 01:56:02</nova:creationTime>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:flavor name="m1.small">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:memory>512</nova:memory>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <nova:port uuid="521d2181-8f17-4f4d-a3a6-98de1e17b734">
Dec  3 01:56:03 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="192.168.0.178" ipVersion="4"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </metadata>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <system>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="serial">52862152-12c7-4236-89c3-67750ecbed7a</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="uuid">52862152-12c7-4236-89c3-67750ecbed7a</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </system>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <os>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </os>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <features>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <apic/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </features>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </clock>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </cpu>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  <devices>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </source>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.eph0">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </source>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <target dev="vdb" bus="virtio"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/52862152-12c7-4236-89c3-67750ecbed7a_disk.config">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </source>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 01:56:03 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      </auth>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </disk>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:8e:09:91"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <target dev="tap521d2181-8f"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </interface>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/console.log" append="off"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </serial>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <video>
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </video>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </rng>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 01:56:03 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 01:56:03 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 01:56:03 compute-0 nova_compute[351485]:  </devices>
Dec  3 01:56:03 compute-0 nova_compute[351485]: </domain>
Dec  3 01:56:03 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.950 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Preparing to wait for external event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.950 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.951 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.951 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.952 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T01:55:57Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  3 01:56:03 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.952 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.953 351492 DEBUG nova.network.os_vif_util [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.953 351492 DEBUG os_vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.954 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.954 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.955 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521d2181-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.959 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521d2181-8f, col_values=(('external_ids', {'iface-id': '521d2181-8f17-4f4d-a3a6-98de1e17b734', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:09:91', 'vm-uuid': '52862152-12c7-4236-89c3-67750ecbed7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.961 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:03 compute-0 NetworkManager[48912]: <info>  [1764726963.9627] manager: (tap521d2181-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.965 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.972 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:03 compute-0 nova_compute[351485]: 2025-12-03 01:56:03.973 351492 INFO os_vif [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f')#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.047 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.048 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.048 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.049 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:8e:09:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.049 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Using config drive#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.100 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:56:04 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 01:56:03.924 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 01:56:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  3 01:56:04 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 01:56:03.952 351492 DEBUG nova.virt.libvirt.vif [None req-c1caf01b-ee [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.453 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated VIF entry in instance network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.454 351492 DEBUG nova.network.neutron [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.476 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Creating config drive at /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.488 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlf3n0le execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.518 351492 DEBUG oslo_concurrency.lockutils [req-ff63cc41-5b5b-49a8-93bd-6f06fc6f7dcc req-ac519cad-eca9-4e59-89a6-6dda4300ead0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.634 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlf3n0le" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.703 351492 DEBUG nova.storage.rbd_utils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 52862152-12c7-4236-89c3-67750ecbed7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 01:56:04 compute-0 nova_compute[351485]: 2025-12-03 01:56:04.726 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config 52862152-12c7-4236-89c3-67750ecbed7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.010 351492 DEBUG oslo_concurrency.processutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config 52862152-12c7-4236-89c3-67750ecbed7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.011 351492 INFO nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deleting local config drive /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a/disk.config because it was imported into RBD.#033[00m
Dec  3 01:56:05 compute-0 kernel: tap521d2181-8f: entered promiscuous mode
Dec  3 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.1244] manager: (tap521d2181-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  3 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00035|binding|INFO|Claiming lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 for this chassis.
Dec  3 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00036|binding|INFO|521d2181-8f17-4f4d-a3a6-98de1e17b734: Claiming fa:16:3e:8e:09:91 192.168.0.178
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.129 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.142 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:09:91 192.168.0.178'], port_security=['fa:16:3e:8e:09:91 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '52862152-12c7-4236-89c3-67750ecbed7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=521d2181-8f17-4f4d-a3a6-98de1e17b734) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.145 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 521d2181-8f17-4f4d-a3a6-98de1e17b734 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.148 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00037|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 ovn-installed in OVS
Dec  3 01:56:05 compute-0 ovn_controller[89134]: 2025-12-03T01:56:05Z|00038|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 up in Southbound
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.171 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.183 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5d05f82f-e0d9-474d-bd0a-14eb588fd414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 systemd-udevd[417086]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 01:56:05 compute-0 systemd-machined[138558]: New machine qemu-2-instance-00000002.
Dec  3 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.2152] device (tap521d2181-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 01:56:05 compute-0 NetworkManager[48912]: <info>  [1764726965.2168] device (tap521d2181-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 01:56:05 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.235 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcb80a8-6923-4c2e-ab5e-11f6dcd7078c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.239 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e6042ccd-b61a-4190-ba77-3d74b94823b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.271 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0cd2e0-e7f3-43dc-a985-2da3630e13ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.297 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5355c38e-331d-4e4b-94c5-65f724bf0a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 36425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417096, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.322 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[69d426ef-7bd0-4378-a228-039bffee61c0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417100, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417100, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.324 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.327 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:05 compute-0 nova_compute[351485]: 2025-12-03 01:56:05.329 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.330 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.331 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.331 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 01:56:05 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:05.332 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.154 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.1533508, 52862152-12c7-4236-89c3-67750ecbed7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.155 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Started (Lifecycle Event)#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.183 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.193 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.1536407, 52862152-12c7-4236-89c3-67750ecbed7a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.193 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Paused (Lifecycle Event)#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.222 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.230 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 01:56:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.256 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.341 351492 DEBUG nova.compute.manager [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.341 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG oslo_concurrency.lockutils [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.342 351492 DEBUG nova.compute.manager [req-4e55141f-e8c6-4667-96b9-0c9e88cb3747 req-84e5b453-6fdb-4910-99db-3396bb7921bb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Processing event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.343 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.351 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764726966.3509345, 52862152-12c7-4236-89c3-67750ecbed7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.352 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Resumed (Lifecycle Event)
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.357 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.366 351492 INFO nova.virt.libvirt.driver [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance spawned successfully.
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.367 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.374 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.385 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.410 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.411 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.412 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.413 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.414 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.416 351492 DEBUG nova.virt.libvirt.driver [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.446 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.474 351492 INFO nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 8.64 seconds to spawn the instance on the hypervisor.
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.475 351492 DEBUG nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.542 351492 INFO nova.compute.manager [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 9.88 seconds to build instance.
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.575 351492 DEBUG oslo_concurrency.lockutils [None req-c1caf01b-ee93-442d-a833-2e7c2efb3fbe 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:56:06 compute-0 nova_compute[351485]: 2025-12-03 01:56:06.588 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aeef6703-2b60-4598-920d-609c8ef8eaed does not exist
Dec  3 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2ebdf4ef-d338-458c-9dbe-4e56ac8a51e3 does not exist
Dec  3 01:56:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2aa095f1-2ca6-45f7-b318-7d8a7cc33b59 does not exist
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:56:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:07 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 01:56:07 compute-0 nova_compute[351485]: 2025-12-03 01:56:07.783 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:56:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.404882497 +0000 UTC m=+0.086544985 container create 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.430 351492 DEBUG nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.431 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.431 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.432 351492 DEBUG oslo_concurrency.lockutils [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.432 351492 DEBUG nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.433 351492 WARNING nova.compute.manager [req-8e536ce9-1b23-4cf2-982e-16b472bfcb35 req-0533b843-376c-4054-8617-107e9bf6d92f 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received unexpected event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with vm_state active and task_state None.
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.371941549 +0000 UTC m=+0.053604127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:08 compute-0 systemd[1]: Started libpod-conmon-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope.
Dec  3 01:56:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.534212119 +0000 UTC m=+0.215874647 container init 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.559377645 +0000 UTC m=+0.241040113 container start 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.564662116 +0000 UTC m=+0.246324604 container attach 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:56:08 compute-0 vibrant_benz[417446]: 167 167
Dec  3 01:56:08 compute-0 systemd[1]: libpod-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope: Deactivated successfully.
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.569475423 +0000 UTC m=+0.251137941 container died 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 01:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbd2a1c57b8847ef4fb04911e0baf7c4d8a0b52db48ad24c3319e442a6eed06a-merged.mount: Deactivated successfully.
Dec  3 01:56:08 compute-0 podman[417430]: 2025-12-03 01:56:08.640070583 +0000 UTC m=+0.321733071 container remove 175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 01:56:08 compute-0 systemd[1]: libpod-conmon-175e31c39f4cef2494dd7094b3dcd374b8318d90856701453c28b618d84de12d.scope: Deactivated successfully.
Dec  3 01:56:08 compute-0 podman[417469]: 2025-12-03 01:56:08.876772551 +0000 UTC m=+0.087506992 container create 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:56:08 compute-0 podman[417469]: 2025-12-03 01:56:08.838903593 +0000 UTC m=+0.049638094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:08 compute-0 nova_compute[351485]: 2025-12-03 01:56:08.961 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:56:08 compute-0 systemd[1]: Started libpod-conmon-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope.
Dec  3 01:56:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.057066394 +0000 UTC m=+0.267800895 container init 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.070901378 +0000 UTC m=+0.281635789 container start 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:56:09 compute-0 podman[417469]: 2025-12-03 01:56:09.077008982 +0000 UTC m=+0.287743473 container attach 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:56:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Dec  3 01:56:10 compute-0 goofy_taussig[417485]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:56:10 compute-0 goofy_taussig[417485]: --> relative data size: 1.0
Dec  3 01:56:10 compute-0 goofy_taussig[417485]: --> All data devices are unavailable
Dec  3 01:56:10 compute-0 systemd[1]: libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Deactivated successfully.
Dec  3 01:56:10 compute-0 systemd[1]: libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Consumed 1.160s CPU time.
Dec  3 01:56:10 compute-0 conmon[417485]: conmon 6bf8d50d0b9a40bb9bee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope/container/memory.events
Dec  3 01:56:10 compute-0 podman[417469]: 2025-12-03 01:56:10.306683858 +0000 UTC m=+1.517418319 container died 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c642cd8f6f6e14dbfe6f9fcba600a064f5376673b6e5f6f5929f42f1211327-merged.mount: Deactivated successfully.
Dec  3 01:56:10 compute-0 podman[417469]: 2025-12-03 01:56:10.411597785 +0000 UTC m=+1.622332196 container remove 6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  3 01:56:10 compute-0 systemd[1]: libpod-conmon-6bf8d50d0b9a40bb9beec656977fe194ef69400e7d1b7a32aaff587b3d7e956b.scope: Deactivated successfully.
Dec  3 01:56:10 compute-0 podman[417514]: 2025-12-03 01:56:10.467810516 +0000 UTC m=+0.102619483 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.651867563 +0000 UTC m=+0.119597595 container create 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.598231066 +0000 UTC m=+0.065961158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:11 compute-0 systemd[1]: Started libpod-conmon-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope.
Dec  3 01:56:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.843405946 +0000 UTC m=+0.311136028 container init 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.860706858 +0000 UTC m=+0.328436880 container start 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.866499563 +0000 UTC m=+0.334229595 container attach 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:56:11 compute-0 goofy_kalam[417699]: 167 167
Dec  3 01:56:11 compute-0 systemd[1]: libpod-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope: Deactivated successfully.
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.874874291 +0000 UTC m=+0.342604313 container died 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-25e97860783b8e7b3cb13c4f4a325da6bc61b5582a227c7696b8e711dc9935b3-merged.mount: Deactivated successfully.
Dec  3 01:56:11 compute-0 podman[417684]: 2025-12-03 01:56:11.961870928 +0000 UTC m=+0.429600970 container remove 5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kalam, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:56:11 compute-0 systemd[1]: libpod-conmon-5715ba0b9e32669fba19f1a6790bce7e6d6cf8ab2726d85c431a7a602ca0f789.scope: Deactivated successfully.
Dec  3 01:56:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 112 KiB/s wr, 74 op/s
Dec  3 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.281795466 +0000 UTC m=+0.134001656 container create 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.22258954 +0000 UTC m=+0.074795750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:12 compute-0 systemd[1]: Started libpod-conmon-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope.
Dec  3 01:56:12 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.440761021 +0000 UTC m=+0.292967241 container init 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.455943824 +0000 UTC m=+0.308149974 container start 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 01:56:12 compute-0 podman[417722]: 2025-12-03 01:56:12.465984359 +0000 UTC m=+0.318190559 container attach 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:56:12 compute-0 nova_compute[351485]: 2025-12-03 01:56:12.785 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:13 compute-0 musing_booth[417738]: {
Dec  3 01:56:13 compute-0 musing_booth[417738]:    "0": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:        {
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "devices": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "/dev/loop3"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            ],
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_name": "ceph_lv0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_size": "21470642176",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "name": "ceph_lv0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "tags": {
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_name": "ceph",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.crush_device_class": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.encrypted": "0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_id": "0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.vdo": "0"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            },
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "vg_name": "ceph_vg0"
Dec  3 01:56:13 compute-0 musing_booth[417738]:        }
Dec  3 01:56:13 compute-0 musing_booth[417738]:    ],
Dec  3 01:56:13 compute-0 musing_booth[417738]:    "1": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:        {
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "devices": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "/dev/loop4"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            ],
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_name": "ceph_lv1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_size": "21470642176",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "name": "ceph_lv1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "tags": {
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_name": "ceph",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.crush_device_class": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.encrypted": "0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_id": "1",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.vdo": "0"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            },
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "vg_name": "ceph_vg1"
Dec  3 01:56:13 compute-0 musing_booth[417738]:        }
Dec  3 01:56:13 compute-0 musing_booth[417738]:    ],
Dec  3 01:56:13 compute-0 musing_booth[417738]:    "2": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:        {
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "devices": [
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "/dev/loop5"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            ],
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_name": "ceph_lv2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_size": "21470642176",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "name": "ceph_lv2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "tags": {
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.cluster_name": "ceph",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.crush_device_class": "",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.encrypted": "0",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osd_id": "2",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:                "ceph.vdo": "0"
Dec  3 01:56:13 compute-0 musing_booth[417738]:            },
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "type": "block",
Dec  3 01:56:13 compute-0 musing_booth[417738]:            "vg_name": "ceph_vg2"
Dec  3 01:56:13 compute-0 musing_booth[417738]:        }
Dec  3 01:56:13 compute-0 musing_booth[417738]:    ]
Dec  3 01:56:13 compute-0 musing_booth[417738]: }
Dec  3 01:56:13 compute-0 systemd[1]: libpod-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope: Deactivated successfully.
Dec  3 01:56:13 compute-0 podman[417722]: 2025-12-03 01:56:13.321761312 +0000 UTC m=+1.173967472 container died 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 01:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e74f476dbfd6692e4324e3b6d7ebf9483b445c5cea3e4087eaf3926281820720-merged.mount: Deactivated successfully.
Dec  3 01:56:13 compute-0 podman[417722]: 2025-12-03 01:56:13.424888858 +0000 UTC m=+1.277095018 container remove 40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:56:13 compute-0 systemd[1]: libpod-conmon-40281e6a9d31fe6a587d6154805254e8d97d6aa6fcca8e2ef1c3326d75cbdea1.scope: Deactivated successfully.
Dec  3 01:56:13 compute-0 podman[417748]: 2025-12-03 01:56:13.502190959 +0000 UTC m=+0.135417496 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec  3 01:56:13 compute-0 nova_compute[351485]: 2025-12-03 01:56:13.966 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 63 op/s
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.575442821 +0000 UTC m=+0.093724759 container create 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.536512403 +0000 UTC m=+0.054794341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:14 compute-0 systemd[1]: Started libpod-conmon-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope.
Dec  3 01:56:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.751887505 +0000 UTC m=+0.270169453 container init 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.776363702 +0000 UTC m=+0.294645610 container start 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.785397249 +0000 UTC m=+0.303679267 container attach 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:56:14 compute-0 fervent_kirch[417932]: 167 167
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.793141959 +0000 UTC m=+0.311423907 container died 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 01:56:14 compute-0 systemd[1]: libpod-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope: Deactivated successfully.
Dec  3 01:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d1cbfb2a6c878fca2591e913a9eb563d4235cbfb5a38574ed1603d1b5d82430-merged.mount: Deactivated successfully.
Dec  3 01:56:14 compute-0 podman[417916]: 2025-12-03 01:56:14.856808242 +0000 UTC m=+0.375090140 container remove 212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:56:14 compute-0 systemd[1]: libpod-conmon-212dcc2c431446bcc875269931752e9b95a1c9ca527180c7ceb9cb7603d18b6d.scope: Deactivated successfully.
Dec  3 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.111367949 +0000 UTC m=+0.088873142 container create 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.079958614 +0000 UTC m=+0.057463877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:56:15 compute-0 systemd[1]: Started libpod-conmon-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope.
Dec  3 01:56:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.284151986 +0000 UTC m=+0.261657209 container init 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.301771808 +0000 UTC m=+0.279277001 container start 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:56:15 compute-0 podman[417957]: 2025-12-03 01:56:15.307584323 +0000 UTC m=+0.285089606 container attach 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:56:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  3 01:56:16 compute-0 reverent_euclid[417973]: {
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_id": 2,
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "type": "bluestore"
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    },
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_id": 1,
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "type": "bluestore"
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    },
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_id": 0,
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:        "type": "bluestore"
Dec  3 01:56:16 compute-0 reverent_euclid[417973]:    }
Dec  3 01:56:16 compute-0 reverent_euclid[417973]: }
Dec  3 01:56:16 compute-0 systemd[1]: libpod-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Deactivated successfully.
Dec  3 01:56:16 compute-0 systemd[1]: libpod-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Consumed 1.205s CPU time.
Dec  3 01:56:16 compute-0 podman[418006]: 2025-12-03 01:56:16.637788651 +0000 UTC m=+0.048694647 container died 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d002b6d7e6578f42653d786ef6942babbbc8df77bd6b704ff4b361e6cf60da4a-merged.mount: Deactivated successfully.
Dec  3 01:56:16 compute-0 podman[418006]: 2025-12-03 01:56:16.786630289 +0000 UTC m=+0.197536245 container remove 8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:56:16 compute-0 systemd[1]: libpod-conmon-8dae29c3f7c710166dcbca532860ebcf38add50f6087112c8152f8cf22e02342.scope: Deactivated successfully.
Dec  3 01:56:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:56:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:56:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4b617dac-bd3e-4cf2-acac-0194b62fc9a7 does not exist
Dec  3 01:56:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a0da2ffd-b30b-4cc4-8395-a8fb9c488253 does not exist
Dec  3 01:56:17 compute-0 podman[418044]: 2025-12-03 01:56:17.241742285 +0000 UTC m=+0.147243163 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc.)
Dec  3 01:56:17 compute-0 podman[418045]: 2025-12-03 01:56:17.242081665 +0000 UTC m=+0.145273717 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 01:56:17 compute-0 podman[418043]: 2025-12-03 01:56:17.280338534 +0000 UTC m=+0.185681307 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 01:56:17 compute-0 nova_compute[351485]: 2025-12-03 01:56:17.789 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:56:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec  3 01:56:18 compute-0 podman[418128]: 2025-12-03 01:56:18.875473444 +0000 UTC m=+0.132955066 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:56:18 compute-0 nova_compute[351485]: 2025-12-03 01:56:18.972 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 50 op/s
Dec  3 01:56:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 502 KiB/s rd, 16 op/s
Dec  3 01:56:22 compute-0 nova_compute[351485]: 2025-12-03 01:56:22.792 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:23 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 01:56:23 compute-0 nova_compute[351485]: 2025-12-03 01:56:23.976 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 11 op/s
Dec  3 01:56:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:27 compute-0 nova_compute[351485]: 2025-12-03 01:56:27.794 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:56:28
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:56:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:56:28 compute-0 nova_compute[351485]: 2025-12-03 01:56:28.979 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:29 compute-0 podman[158098]: time="2025-12-03T01:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:56:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:56:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Dec  3 01:56:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:56:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:56:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:32 compute-0 nova_compute[351485]: 2025-12-03 01:56:32.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:33 compute-0 nova_compute[351485]: 2025-12-03 01:56:33.981 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:34 compute-0 podman[418156]: 2025-12-03 01:56:34.864429231 +0000 UTC m=+0.112768521 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  3 01:56:34 compute-0 podman[418158]: 2025-12-03 01:56:34.872618434 +0000 UTC m=+0.116614551 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 01:56:34 compute-0 podman[418157]: 2025-12-03 01:56:34.884211624 +0000 UTC m=+0.133482781 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 01:56:35 compute-0 ovn_controller[89134]: 2025-12-03T01:56:35Z|00039|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  3 01:56:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Dec  3 01:56:37 compute-0 nova_compute[351485]: 2025-12-03 01:56:37.799 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008203201308849384 of space, bias 1.0, pg target 0.24609603926548154 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:56:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:56:38 compute-0 nova_compute[351485]: 2025-12-03 01:56:38.985 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:56:40 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 01:56:40 compute-0 podman[418214]: 2025-12-03 01:56:40.900464445 +0000 UTC m=+0.145351949 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 01:56:41 compute-0 ovn_controller[89134]: 2025-12-03T01:56:41Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:09:91 192.168.0.178
Dec  3 01:56:41 compute-0 ovn_controller[89134]: 2025-12-03T01:56:41Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:09:91 192.168.0.178
Dec  3 01:56:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 115 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 381 KiB/s wr, 13 op/s
Dec  3 01:56:42 compute-0 nova_compute[351485]: 2025-12-03 01:56:42.804 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:43 compute-0 podman[418234]: 2025-12-03 01:56:43.88221072 +0000 UTC m=+0.125075882 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, config_id=edpm, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 01:56:43 compute-0 nova_compute[351485]: 2025-12-03 01:56:43.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 122 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 795 KiB/s wr, 28 op/s
Dec  3 01:56:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:56:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:56:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322596143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:56:47 compute-0 nova_compute[351485]: 2025-12-03 01:56:47.809 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:47 compute-0 podman[418254]: 2025-12-03 01:56:47.880511995 +0000 UTC m=+0.128234872 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec  3 01:56:47 compute-0 podman[418255]: 2025-12-03 01:56:47.881496933 +0000 UTC m=+0.125860044 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:56:47 compute-0 podman[418253]: 2025-12-03 01:56:47.929320164 +0000 UTC m=+0.182441355 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:56:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 01:56:48 compute-0 nova_compute[351485]: 2025-12-03 01:56:48.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:49 compute-0 podman[418317]: 2025-12-03 01:56:49.848928592 +0000 UTC m=+0.104557078 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:56:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 01:56:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 01:56:52 compute-0 nova_compute[351485]: 2025-12-03 01:56:52.815 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:53 compute-0 nova_compute[351485]: 2025-12-03 01:56:53.998 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec  3 01:56:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 725 KiB/s wr, 29 op/s
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:56:56 compute-0 nova_compute[351485]: 2025-12-03 01:56:56.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:56:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757362410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.105 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.228 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.242 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.243 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.244 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.818 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.853 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.856 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3767MB free_disk=59.92203140258789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.857 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:56:57 compute-0 nova_compute[351485]: 2025-12-03 01:56:57.859 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:56:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.140 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.343 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:56:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:56:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:56:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259349955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.899 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.914 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.937 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.970 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.971 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.972 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.972 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 01:56:58 compute-0 nova_compute[351485]: 2025-12-03 01:56:58.992 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 01:56:59 compute-0 nova_compute[351485]: 2025-12-03 01:56:59.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.621 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.622 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:56:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:56:59.623 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:56:59 compute-0 podman[158098]: time="2025-12-03T01:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:56:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:56:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8633 "" "Go-http-client/1.1"
Dec  3 01:57:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.992 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.993 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:57:00 compute-0 nova_compute[351485]: 2025-12-03 01:57:00.994 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:57:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.949 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.950 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.951 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 01:57:01 compute-0 nova_compute[351485]: 2025-12-03 01:57:01.952 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:57:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Dec  3 01:57:02 compute-0 nova_compute[351485]: 2025-12-03 01:57:02.821 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.007 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.130 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:57:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.361 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.363 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.363 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.364 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:04 compute-0 nova_compute[351485]: 2025-12-03 01:57:04.364 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:05 compute-0 podman[418387]: 2025-12-03 01:57:05.842893967 +0000 UTC m=+0.086823443 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 01:57:05 compute-0 podman[418386]: 2025-12-03 01:57:05.855675121 +0000 UTC m=+0.105536576 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 01:57:05 compute-0 podman[418388]: 2025-12-03 01:57:05.864689167 +0000 UTC m=+0.100507742 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 01:57:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:07 compute-0 nova_compute[351485]: 2025-12-03 01:57:07.825 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:08 compute-0 nova_compute[351485]: 2025-12-03 01:57:08.942 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.010 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:09 compute-0 nova_compute[351485]: 2025-12-03 01:57:09.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:57:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:11 compute-0 nova_compute[351485]: 2025-12-03 01:57:11.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:11 compute-0 podman[418446]: 2025-12-03 01:57:11.874405921 +0000 UTC m=+0.129968001 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, 
org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  3 01:57:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:12 compute-0 nova_compute[351485]: 2025-12-03 01:57:12.828 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:13 compute-0 nova_compute[351485]: 2025-12-03 01:57:13.597 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:13 compute-0 nova_compute[351485]: 2025-12-03 01:57:13.598 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 01:57:14 compute-0 nova_compute[351485]: 2025-12-03 01:57:14.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 01:57:14 compute-0 podman[418466]: 2025-12-03 01:57:14.879372066 +0000 UTC m=+0.151396711 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, name=ubi9, release-0.7.12=, vcs-type=git, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:57:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 01:57:17 compute-0 nova_compute[351485]: 2025-12-03 01:57:17.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d1c20a02-8447-4aa7-8f93-24cb19a11764 does not exist
Dec  3 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 00796445-046d-4a60-bafa-0db5c34e18ec does not exist
Dec  3 01:57:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 752a43dc-9242-4610-87fd-3c67a2981d4e does not exist
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:57:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:18 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:57:18 compute-0 podman[418642]: 2025-12-03 01:57:18.8098643 +0000 UTC m=+0.108144820 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 01:57:18 compute-0 podman[418641]: 2025-12-03 01:57:18.810282802 +0000 UTC m=+0.103368124 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., vcs-type=git)
Dec  3 01:57:18 compute-0 podman[418640]: 2025-12-03 01:57:18.851128085 +0000 UTC m=+0.155217200 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:57:19 compute-0 nova_compute[351485]: 2025-12-03 01:57:19.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.504 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.504 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.510 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 52862152-12c7-4236-89c3-67750ecbed7a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.512 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.541363215 +0000 UTC m=+0.089861760 container create 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.500853501 +0000 UTC m=+0.049352086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:19 compute-0 systemd[1]: Started libpod-conmon-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope.
Dec  3 01:57:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.70001226 +0000 UTC m=+0.248510815 container init 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.717497068 +0000 UTC m=+0.265995583 container start 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.722682515 +0000 UTC m=+0.271181070 container attach 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 01:57:19 compute-0 peaceful_shtern[418835]: 167 167
Dec  3 01:57:19 compute-0 systemd[1]: libpod-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope: Deactivated successfully.
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.732970098 +0000 UTC m=+0.281468613 container died 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2769030cee8bd57b48611438f886015a6956f0c98beb52114e5037c92899095f-merged.mount: Deactivated successfully.
Dec  3 01:57:19 compute-0 podman[418818]: 2025-12-03 01:57:19.801704425 +0000 UTC m=+0.350202950 container remove 039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:57:19 compute-0 systemd[1]: libpod-conmon-039ac1e3fe45efb051d3a9ddd2dd390eaa68068343afbbc0bf36de1c9be256b1.scope: Deactivated successfully.
Dec  3 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.064876307 +0000 UTC m=+0.106116892 container create 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.034495242 +0000 UTC m=+0.075735927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:20 compute-0 systemd[1]: Started libpod-conmon-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope.
Dec  3 01:57:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.225422668 +0000 UTC m=+0.266663293 container init 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.241032672 +0000 UTC m=+0.282273287 container start 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 01:57:20 compute-0 podman[418858]: 2025-12-03 01:57:20.251753487 +0000 UTC m=+0.292994092 container attach 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.252 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 01:57:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5bfa588f-504a-43fd-9f33-4984925b3cd4 x-openstack-request-id: req-5bfa588f-504a-43fd-9f33-4984925b3cd4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.253 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "52862152-12c7-4236-89c3-67750ecbed7a", "name": "vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T01:55:54Z", "updated": "2025-12-03T01:56:06Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.178", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8e:09:91"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8e:09:91"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/52862152-12c7-4236-89c3-67750ecbed7a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T01:56:06.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.253 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/52862152-12c7-4236-89c3-67750ecbed7a used request id req-5bfa588f-504a-43fd-9f33-4984925b3cd4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.254 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 01:57:20 compute-0 podman[418872]: 2025-12-03 01:57:20.255077842 +0000 UTC m=+0.124317210 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.257 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:57:20.257851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.4 KiB/s wr, 0 op/s
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:57:20.322808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.327 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 52862152-12c7-4236-89c3-67750ecbed7a / tap521d2181-8f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.327 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.330 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 1788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:57:20.332194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.333 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:57:20.333670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.335 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:57:20.335050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:57:20.337100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:57:20.338644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.371 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.371 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.372 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.398 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.398 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>]
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T01:57:20.400056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:57:20.401331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.467 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.468 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.468 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.558 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.559 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.560 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.562 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.563 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:57:20.562294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.564 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.565 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:57:20.565448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.566 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.567 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.568 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.568 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.570 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.571 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.571 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.572 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:57:20.570407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:57:20.575023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.577 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:57:20.577714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.578 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.579 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.580 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.582 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41713664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.583 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.584 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:57:20.582661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.585 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.585 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.586 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.588 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6674812043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.589 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.590 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.591 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:57:20.588465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.592 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.592 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.595 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.595 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.596 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.596 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.597 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.597 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:57:20.594853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:57:20.600172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.600 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 36190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 34570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:57:20.601979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:57:20.603712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.605 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:57:20.605635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4686 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.606 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:57:20.608127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.608 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.609 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.609 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.610 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.610 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.612 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:57:20.612434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:57:20.614362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.616 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:57:20.616165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.618 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l>]
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T01:57:20.618062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.619 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.620 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.621 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:57:20.621 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:57:21 compute-0 bold_ramanujan[418881]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:57:21 compute-0 bold_ramanujan[418881]: --> relative data size: 1.0
Dec  3 01:57:21 compute-0 bold_ramanujan[418881]: --> All data devices are unavailable
Dec  3 01:57:21 compute-0 systemd[1]: libpod-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Deactivated successfully.
Dec  3 01:57:21 compute-0 systemd[1]: libpod-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Consumed 1.210s CPU time.
Dec  3 01:57:21 compute-0 podman[418858]: 2025-12-03 01:57:21.547350401 +0000 UTC m=+1.588591016 container died 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-265ed5ee5f5a28f46c44bc4a99a4e64ebf9933d6097e9eab107acc1969af54d3-merged.mount: Deactivated successfully.
Dec  3 01:57:21 compute-0 podman[418858]: 2025-12-03 01:57:21.633844423 +0000 UTC m=+1.675085008 container remove 8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 01:57:21 compute-0 systemd[1]: libpod-conmon-8702205d592ac59e33762937dff0cf34ddea7bab9fbbaee09f42cb8fce1d7889.scope: Deactivated successfully.
Dec  3 01:57:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec  3 01:57:22 compute-0 nova_compute[351485]: 2025-12-03 01:57:22.834 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:57:22 compute-0 podman[419073]: 2025-12-03 01:57:22.911322521 +0000 UTC m=+0.073357619 container create 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 01:57:22 compute-0 podman[419073]: 2025-12-03 01:57:22.879101564 +0000 UTC m=+0.041136712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:22 compute-0 systemd[1]: Started libpod-conmon-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope.
Dec  3 01:57:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.078064318 +0000 UTC m=+0.240099446 container init 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.096102492 +0000 UTC m=+0.258137590 container start 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec  3 01:57:23 compute-0 musing_ritchie[419089]: 167 167
Dec  3 01:57:23 compute-0 systemd[1]: libpod-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope: Deactivated successfully.
Dec  3 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.108067702 +0000 UTC m=+0.270102860 container attach 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:57:23 compute-0 conmon[419089]: conmon 57948232751d62aa8fde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope/container/memory.events
Dec  3 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.109895324 +0000 UTC m=+0.271930392 container died 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca127befe674676f895259f419dcb1679fa04c130d53d4c183d092f145e679d-merged.mount: Deactivated successfully.
Dec  3 01:57:23 compute-0 podman[419073]: 2025-12-03 01:57:23.176476689 +0000 UTC m=+0.338511757 container remove 57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ritchie, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 01:57:23 compute-0 systemd[1]: libpod-conmon-57948232751d62aa8fdebd3e04cf37711d57dab5df7ac962678016f9bfd5a6e8.scope: Deactivated successfully.
Dec  3 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.451035545 +0000 UTC m=+0.096250691 container create fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.402817872 +0000 UTC m=+0.048033058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:23 compute-0 systemd[1]: Started libpod-conmon-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope.
Dec  3 01:57:23 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.625955635 +0000 UTC m=+0.271170841 container init fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.6458165 +0000 UTC m=+0.291031646 container start fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:57:23 compute-0 podman[419113]: 2025-12-03 01:57:23.654313552 +0000 UTC m=+0.299528758 container attach fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:57:24 compute-0 nova_compute[351485]: 2025-12-03 01:57:24.021 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 01:57:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 0 op/s
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]: {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    "0": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "devices": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "/dev/loop3"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            ],
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_name": "ceph_lv0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_size": "21470642176",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "name": "ceph_lv0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "tags": {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_name": "ceph",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.crush_device_class": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.encrypted": "0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_id": "0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.vdo": "0"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            },
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "vg_name": "ceph_vg0"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        }
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    ],
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    "1": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "devices": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "/dev/loop4"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            ],
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_name": "ceph_lv1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_size": "21470642176",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "name": "ceph_lv1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "tags": {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_name": "ceph",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.crush_device_class": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.encrypted": "0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_id": "1",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.vdo": "0"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            },
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "vg_name": "ceph_vg1"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        }
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    ],
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    "2": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "devices": [
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "/dev/loop5"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            ],
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_name": "ceph_lv2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_size": "21470642176",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "name": "ceph_lv2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "tags": {
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.cluster_name": "ceph",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.crush_device_class": "",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.encrypted": "0",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osd_id": "2",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:                "ceph.vdo": "0"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            },
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "type": "block",
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:            "vg_name": "ceph_vg2"
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:        }
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]:    ]
Dec  3 01:57:24 compute-0 blissful_nightingale[419129]: }
Dec  3 01:57:24 compute-0 systemd[1]: libpod-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope: Deactivated successfully.
Dec  3 01:57:24 compute-0 podman[419113]: 2025-12-03 01:57:24.492424491 +0000 UTC m=+1.137639637 container died fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1df8fb50ee975806cce9a1daed6812da1386d4af6e7fd7b427667df858cba8-merged.mount: Deactivated successfully.
Dec  3 01:57:24 compute-0 podman[419113]: 2025-12-03 01:57:24.573281974 +0000 UTC m=+1.218497100 container remove fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:57:24 compute-0 systemd[1]: libpod-conmon-fd3e44059479ab1f7273bc62986b5f2b54b9ffea83cf6fb11e120a5a7583a08d.scope: Deactivated successfully.
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.703863259 +0000 UTC m=+0.090844357 container create 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.662830221 +0000 UTC m=+0.049811349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:25 compute-0 systemd[1]: Started libpod-conmon-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope.
Dec  3 01:57:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.857506433 +0000 UTC m=+0.244487571 container init 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.872082798 +0000 UTC m=+0.259063916 container start 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.877853422 +0000 UTC m=+0.264834550 container attach 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 01:57:25 compute-0 determined_mcclintock[419301]: 167 167
Dec  3 01:57:25 compute-0 systemd[1]: libpod-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope: Deactivated successfully.
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.885266173 +0000 UTC m=+0.272247291 container died 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0592ad9285dda35fa8c758e610542d6b3f18746678fe81edb59d8b6d53247249-merged.mount: Deactivated successfully.
Dec  3 01:57:25 compute-0 podman[419286]: 2025-12-03 01:57:25.978845087 +0000 UTC m=+0.365826205 container remove 0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 01:57:25 compute-0 systemd[1]: libpod-conmon-0d388107a3ce64492193b74b81b6dbd0f2a2ce83db970161babbccb3cd1ee55b.scope: Deactivated successfully.
Dec  3 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.294614167 +0000 UTC m=+0.103357664 container create fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:57:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.2 KiB/s wr, 1 op/s
Dec  3 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.244794849 +0000 UTC m=+0.053538396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:57:26 compute-0 systemd[1]: Started libpod-conmon-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope.
Dec  3 01:57:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.502920337 +0000 UTC m=+0.311663844 container init fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.520108646 +0000 UTC m=+0.328852173 container start fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 01:57:26 compute-0 podman[419326]: 2025-12-03 01:57:26.526419736 +0000 UTC m=+0.335163223 container attach fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 01:57:27 compute-0 hardcore_brown[419342]: {
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_id": 2,
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "type": "bluestore"
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    },
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_id": 1,
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "type": "bluestore"
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    },
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_id": 0,
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:        "type": "bluestore"
Dec  3 01:57:27 compute-0 hardcore_brown[419342]:    }
Dec  3 01:57:27 compute-0 hardcore_brown[419342]: }
Dec  3 01:57:27 compute-0 systemd[1]: libpod-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Deactivated successfully.
Dec  3 01:57:27 compute-0 systemd[1]: libpod-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Consumed 1.210s CPU time.
Dec  3 01:57:27 compute-0 podman[419326]: 2025-12-03 01:57:27.732242093 +0000 UTC m=+1.540985610 container died fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 01:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b47354e178c7f61aa0a30d20a787908048d091d3aff3089ad8bbed6b14b285b-merged.mount: Deactivated successfully.
Dec  3 01:57:27 compute-0 nova_compute[351485]: 2025-12-03 01:57:27.836 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:27 compute-0 podman[419326]: 2025-12-03 01:57:27.850903331 +0000 UTC m=+1.659646858 container remove fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:57:27 compute-0 systemd[1]: libpod-conmon-fb11721ceb81495552b99841801f80baa7a882a3201d8be50924fe93600cf320.scope: Deactivated successfully.
Dec  3 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:57:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:57:27 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 381693e6-6dd5-441a-9813-ea855e24610a does not exist
Dec  3 01:57:27 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b1161126-419c-4642-b684-8bd36b03048e does not exist
Dec  3 01:57:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:57:28
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.control']
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:57:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:57:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:28 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:57:29 compute-0 nova_compute[351485]: 2025-12-03 01:57:29.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:29 compute-0 podman[158098]: time="2025-12-03T01:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:57:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:57:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec  3 01:57:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:57:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:57:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.1 KiB/s wr, 1 op/s
Dec  3 01:57:32 compute-0 nova_compute[351485]: 2025-12-03 01:57:32.839 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:34 compute-0 nova_compute[351485]: 2025-12-03 01:57:34.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  3 01:57:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  3 01:57:36 compute-0 podman[419438]: 2025-12-03 01:57:36.854390974 +0000 UTC m=+0.117764813 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 01:57:36 compute-0 podman[419439]: 2025-12-03 01:57:36.860155098 +0000 UTC m=+0.109348703 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 01:57:36 compute-0 podman[419443]: 2025-12-03 01:57:36.86022354 +0000 UTC m=+0.097494076 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:57:37 compute-0 nova_compute[351485]: 2025-12-03 01:57:37.843 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:57:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:57:39 compute-0 nova_compute[351485]: 2025-12-03 01:57:39.034 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:57:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5936 writes, 26K keys, 5936 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5936 writes, 5936 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1341 writes, 6083 keys, 1341 commit groups, 1.0 writes per commit group, ingest: 8.80 MB, 0.01 MB/s#012Interval WAL: 1341 writes, 1341 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.4      0.32              0.14        15    0.021       0      0       0.0       0.0#012  L6      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    130.3    105.7      0.94              0.46        14    0.067     63K   7810       0.0       0.0#012 Sum      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     97.1    102.8      1.26              0.60        29    0.044     63K   7810       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    110.5    111.6      0.35              0.17         8    0.043     20K   2552       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.3    105.7      0.94              0.46        14    0.067     63K   7810       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.1      0.32              0.14        14    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.3 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 13.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000199 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(855,12.57 MB,4.08052%) FilterBlock(30,183.67 KB,0.0582361%) IndexBlock(30,338.95 KB,0.10747%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 01:57:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:42 compute-0 nova_compute[351485]: 2025-12-03 01:57:42.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:42 compute-0 podman[419496]: 2025-12-03 01:57:42.883856482 +0000 UTC m=+0.131708991 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:57:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:44 compute-0 nova_compute[351485]: 2025-12-03 01:57:44.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:45 compute-0 podman[419517]: 2025-12-03 01:57:45.868413557 +0000 UTC m=+0.125143504 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public)
Dec  3 01:57:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:57:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:57:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41394511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:57:47 compute-0 nova_compute[351485]: 2025-12-03 01:57:47.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:49 compute-0 nova_compute[351485]: 2025-12-03 01:57:49.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:49 compute-0 podman[419537]: 2025-12-03 01:57:49.894948455 +0000 UTC m=+0.143604069 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  3 01:57:49 compute-0 podman[419538]: 2025-12-03 01:57:49.896470198 +0000 UTC m=+0.138286587 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:57:49 compute-0 podman[419536]: 2025-12-03 01:57:49.943265821 +0000 UTC m=+0.194943281 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:57:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:50 compute-0 podman[419595]: 2025-12-03 01:57:50.883850917 +0000 UTC m=+0.142451846 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:57:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:52 compute-0 nova_compute[351485]: 2025-12-03 01:57:52.852 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:54 compute-0 nova_compute[351485]: 2025-12-03 01:57:54.048 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:57:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:57:56 compute-0 nova_compute[351485]: 2025-12-03 01:57:56.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:57 compute-0 nova_compute[351485]: 2025-12-03 01:57:57.855 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:57:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.608 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:57:58 compute-0 nova_compute[351485]: 2025-12-03 01:57:58.608 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:57:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:57:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665492698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.145 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.273 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.274 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.285 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.286 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.623 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.624 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:57:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:57:59.624 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:57:59 compute-0 podman[158098]: time="2025-12-03T01:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:57:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:57:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.844 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3750MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.846 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.924 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:57:59 compute-0 nova_compute[351485]: 2025-12-03 01:57:59.925 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.001 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:58:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:58:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:58:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834711140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.567 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.580 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.601 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.605 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:58:00 compute-0 nova_compute[351485]: 2025-12-03 01:58:00.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:58:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:58:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.604 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.604 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:58:02 compute-0 nova_compute[351485]: 2025-12-03 01:58:02.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.182 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.183 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:58:03 compute-0 nova_compute[351485]: 2025-12-03 01:58:03.184 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 01:58:04 compute-0 nova_compute[351485]: 2025-12-03 01:58:04.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:58:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.234 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.272 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.273 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.274 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.275 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.277 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:05 compute-0 nova_compute[351485]: 2025-12-03 01:58:05.278 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  3 01:58:07 compute-0 nova_compute[351485]: 2025-12-03 01:58:07.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:07 compute-0 podman[419668]: 2025-12-03 01:58:07.883454104 +0000 UTC m=+0.125965547 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:58:07 compute-0 podman[419669]: 2025-12-03 01:58:07.905568734 +0000 UTC m=+0.143425424 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:58:07 compute-0 podman[419667]: 2025-12-03 01:58:07.905962295 +0000 UTC m=+0.149134197 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  3 01:58:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 01:58:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:09 compute-0 nova_compute[351485]: 2025-12-03 01:58:09.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.245 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:10 compute-0 nova_compute[351485]: 2025-12-03 01:58:10.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:58:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:12 compute-0 nova_compute[351485]: 2025-12-03 01:58:12.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:13 compute-0 podman[419723]: 2025-12-03 01:58:13.877506514 +0000 UTC m=+0.129495358 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:58:14 compute-0 nova_compute[351485]: 2025-12-03 01:58:14.067 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:16 compute-0 podman[419743]: 2025-12-03 01:58:16.890362464 +0000 UTC m=+0.135953540 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, release-0.7.12=, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., version=9.4, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64)
Dec  3 01:58:17 compute-0 nova_compute[351485]: 2025-12-03 01:58:17.865 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:19 compute-0 nova_compute[351485]: 2025-12-03 01:58:19.072 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:20 compute-0 podman[419764]: 2025-12-03 01:58:20.85395208 +0000 UTC m=+0.110583189 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal)
Dec  3 01:58:20 compute-0 podman[419765]: 2025-12-03 01:58:20.891481779 +0000 UTC m=+0.129963161 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 01:58:20 compute-0 podman[419763]: 2025-12-03 01:58:20.912121116 +0000 UTC m=+0.165408510 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  3 01:58:21 compute-0 podman[419825]: 2025-12-03 01:58:21.07523213 +0000 UTC m=+0.137673851 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 01:58:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:22 compute-0 nova_compute[351485]: 2025-12-03 01:58:22.869 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:24 compute-0 nova_compute[351485]: 2025-12-03 01:58:24.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:27 compute-0 nova_compute[351485]: 2025-12-03 01:58:27.873 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:58:28
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:58:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:58:29 compute-0 nova_compute[351485]: 2025-12-03 01:58:29.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 48b916e5-72d2-412a-995f-f4a40c736671 does not exist
Dec  3 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 826cef08-43ff-40be-b71b-fbef29d88a6f does not exist
Dec  3 01:58:29 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d556bf48-2086-409c-8d42-8672436a7e4e does not exist
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:58:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:58:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:30 compute-0 podman[158098]: time="2025-12-03T01:58:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:58:30 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:58:30 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.497971569 +0000 UTC m=+0.100340218 container create 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.453429471 +0000 UTC m=+0.055798180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:30 compute-0 systemd[1]: Started libpod-conmon-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope.
Dec  3 01:58:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.650689916 +0000 UTC m=+0.253058575 container init 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.670804599 +0000 UTC m=+0.273173248 container start 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.678427666 +0000 UTC m=+0.280796305 container attach 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:58:30 compute-0 objective_carver[420138]: 167 167
Dec  3 01:58:30 compute-0 systemd[1]: libpod-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope: Deactivated successfully.
Dec  3 01:58:30 compute-0 conmon[420138]: conmon 8921a87896fce64f7304 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope/container/memory.events
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.686157696 +0000 UTC m=+0.288526335 container died 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 01:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-13004c528b2ac976a4f5923f3611ffb2119d4f27964d7fd9dc89f841a916ac60-merged.mount: Deactivated successfully.
Dec  3 01:58:30 compute-0 podman[420123]: 2025-12-03 01:58:30.759331749 +0000 UTC m=+0.361700368 container remove 8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:58:30 compute-0 systemd[1]: libpod-conmon-8921a87896fce64f7304f4cf61923694e2d6d1d35a5592f89c54f970047ee53e.scope: Deactivated successfully.
Dec  3 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.033474754 +0000 UTC m=+0.084871438 container create 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:30.998639982 +0000 UTC m=+0.050036666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:31 compute-0 systemd[1]: Started libpod-conmon-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope.
Dec  3 01:58:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.252830467 +0000 UTC m=+0.304227181 container init 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.283610794 +0000 UTC m=+0.335007458 container start 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:58:31 compute-0 podman[420161]: 2025-12-03 01:58:31.291667733 +0000 UTC m=+0.343064417 container attach 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:58:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:58:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> relative data size: 1.0
Dec  3 01:58:32 compute-0 ecstatic_grothendieck[420177]: --> All data devices are unavailable
Dec  3 01:58:32 compute-0 systemd[1]: libpod-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Deactivated successfully.
Dec  3 01:58:32 compute-0 podman[420161]: 2025-12-03 01:58:32.618606349 +0000 UTC m=+1.670003033 container died 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:58:32 compute-0 systemd[1]: libpod-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Consumed 1.271s CPU time.
Dec  3 01:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc691cae6b2ea53f3513405a37c9a771183bd5e7376b7cd63035a1c32e0771c-merged.mount: Deactivated successfully.
Dec  3 01:58:32 compute-0 podman[420161]: 2025-12-03 01:58:32.729680781 +0000 UTC m=+1.781077435 container remove 172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:58:32 compute-0 systemd[1]: libpod-conmon-172fcfc27e20f940a2e8806f8c39af3f3c97e667fc9ff54456b82712f5123abe.scope: Deactivated successfully.
Dec  3 01:58:32 compute-0 nova_compute[351485]: 2025-12-03 01:58:32.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:33 compute-0 podman[420358]: 2025-12-03 01:58:33.943899658 +0000 UTC m=+0.081973155 container create 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:33.913455271 +0000 UTC m=+0.051528848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:34 compute-0 systemd[1]: Started libpod-conmon-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope.
Dec  3 01:58:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:34 compute-0 nova_compute[351485]: 2025-12-03 01:58:34.085 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.094860886 +0000 UTC m=+0.232934393 container init 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.112464217 +0000 UTC m=+0.250537744 container start 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.124561101 +0000 UTC m=+0.262634628 container attach 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 01:58:34 compute-0 eager_perlman[420373]: 167 167
Dec  3 01:58:34 compute-0 systemd[1]: libpod-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope: Deactivated successfully.
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.133685161 +0000 UTC m=+0.271758658 container died 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f14cf6abee0b5088d4be5b7ac072d7836dad75ed175dd289afa0ecb4319eacdb-merged.mount: Deactivated successfully.
Dec  3 01:58:34 compute-0 podman[420358]: 2025-12-03 01:58:34.187103842 +0000 UTC m=+0.325177339 container remove 4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 01:58:34 compute-0 systemd[1]: libpod-conmon-4b7e4862b976ee478d87eb77e9318031978fc07b63dce024d71ad9b3d180b3b6.scope: Deactivated successfully.
Dec  3 01:58:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.458712244 +0000 UTC m=+0.083096897 container create 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.425052515 +0000 UTC m=+0.049437198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:34 compute-0 systemd[1]: Started libpod-conmon-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope.
Dec  3 01:58:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.681269389 +0000 UTC m=+0.305654042 container init 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.700214699 +0000 UTC m=+0.324599342 container start 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:58:34 compute-0 podman[420396]: 2025-12-03 01:58:34.710342226 +0000 UTC m=+0.334726859 container attach 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:58:35 compute-0 zen_gould[420412]: {
Dec  3 01:58:35 compute-0 zen_gould[420412]:    "0": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:        {
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "devices": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "/dev/loop3"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            ],
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_name": "ceph_lv0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_size": "21470642176",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "name": "ceph_lv0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "tags": {
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_name": "ceph",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.crush_device_class": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.encrypted": "0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_id": "0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.vdo": "0"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            },
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "vg_name": "ceph_vg0"
Dec  3 01:58:35 compute-0 zen_gould[420412]:        }
Dec  3 01:58:35 compute-0 zen_gould[420412]:    ],
Dec  3 01:58:35 compute-0 zen_gould[420412]:    "1": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:        {
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "devices": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "/dev/loop4"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            ],
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_name": "ceph_lv1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_size": "21470642176",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "name": "ceph_lv1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "tags": {
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_name": "ceph",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.crush_device_class": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.encrypted": "0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_id": "1",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.vdo": "0"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            },
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "vg_name": "ceph_vg1"
Dec  3 01:58:35 compute-0 zen_gould[420412]:        }
Dec  3 01:58:35 compute-0 zen_gould[420412]:    ],
Dec  3 01:58:35 compute-0 zen_gould[420412]:    "2": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:        {
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "devices": [
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "/dev/loop5"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            ],
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_name": "ceph_lv2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_size": "21470642176",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "name": "ceph_lv2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "tags": {
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.cluster_name": "ceph",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.crush_device_class": "",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.encrypted": "0",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osd_id": "2",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:                "ceph.vdo": "0"
Dec  3 01:58:35 compute-0 zen_gould[420412]:            },
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "type": "block",
Dec  3 01:58:35 compute-0 zen_gould[420412]:            "vg_name": "ceph_vg2"
Dec  3 01:58:35 compute-0 zen_gould[420412]:        }
Dec  3 01:58:35 compute-0 zen_gould[420412]:    ]
Dec  3 01:58:35 compute-0 zen_gould[420412]: }
Dec  3 01:58:35 compute-0 systemd[1]: libpod-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope: Deactivated successfully.
Dec  3 01:58:35 compute-0 podman[420396]: 2025-12-03 01:58:35.56617567 +0000 UTC m=+1.190560313 container died 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 01:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9648888d36fa492062575301eed4001ae2317a57f6ebc263486181729c74e58c-merged.mount: Deactivated successfully.
Dec  3 01:58:35 compute-0 podman[420396]: 2025-12-03 01:58:35.659891968 +0000 UTC m=+1.284276581 container remove 917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:58:35 compute-0 systemd[1]: libpod-conmon-917cadf327ac9a0632cc1c109c5cac2f9570adcc063afd591a6d09aa136b5041.scope: Deactivated successfully.
Dec  3 01:58:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.765910135 +0000 UTC m=+0.067481373 container create 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:58:36 compute-0 systemd[1]: Started libpod-conmon-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope.
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.739204474 +0000 UTC m=+0.040775702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.891090888 +0000 UTC m=+0.192662146 container init 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.910375797 +0000 UTC m=+0.211947005 container start 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.914914506 +0000 UTC m=+0.216485734 container attach 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 01:58:36 compute-0 eager_raman[420586]: 167 167
Dec  3 01:58:36 compute-0 systemd[1]: libpod-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope: Deactivated successfully.
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.922099261 +0000 UTC m=+0.223670469 container died 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3234b79b61a9e1992bf15f7bea7b168ea6877805c9b3f2117e664b23922dcf4-merged.mount: Deactivated successfully.
Dec  3 01:58:36 compute-0 podman[420570]: 2025-12-03 01:58:36.978086075 +0000 UTC m=+0.279657293 container remove 843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 01:58:37 compute-0 systemd[1]: libpod-conmon-843942d364f19739d682b4ef5e8a4d878fbe572b2517e715549bfd178f3cdb06.scope: Deactivated successfully.
Dec  3 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.272755804 +0000 UTC m=+0.114411469 container create c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.224912542 +0000 UTC m=+0.066568277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:58:37 compute-0 systemd[1]: Started libpod-conmon-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope.
Dec  3 01:58:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.402449476 +0000 UTC m=+0.244105151 container init c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.424775321 +0000 UTC m=+0.266430986 container start c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 01:58:37 compute-0 podman[420609]: 2025-12-03 01:58:37.42999235 +0000 UTC m=+0.271648025 container attach c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 01:58:37 compute-0 nova_compute[351485]: 2025-12-03 01:58:37.876 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:58:38 compute-0 charming_lamport[420625]: {
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_id": 2,
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "type": "bluestore"
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    },
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_id": 1,
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "type": "bluestore"
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    },
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_id": 0,
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:58:38 compute-0 charming_lamport[420625]:        "type": "bluestore"
Dec  3 01:58:38 compute-0 charming_lamport[420625]:    }
Dec  3 01:58:38 compute-0 charming_lamport[420625]: }
Dec  3 01:58:38 compute-0 podman[420609]: 2025-12-03 01:58:38.718754528 +0000 UTC m=+1.560410263 container died c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 01:58:38 compute-0 systemd[1]: libpod-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Deactivated successfully.
Dec  3 01:58:38 compute-0 systemd[1]: libpod-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Consumed 1.272s CPU time.
Dec  3 01:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5ec52496afabfca90bf16574fd515c966616187e23a449fa9a462e42223014-merged.mount: Deactivated successfully.
Dec  3 01:58:38 compute-0 podman[420609]: 2025-12-03 01:58:38.82103477 +0000 UTC m=+1.662690425 container remove c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:58:38 compute-0 systemd[1]: libpod-conmon-c2f1506abf36ea52559d190f4335a2c6696b3b818376b5d03dc80b984e319a7c.scope: Deactivated successfully.
Dec  3 01:58:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:58:38 compute-0 podman[420658]: 2025-12-03 01:58:38.869672424 +0000 UTC m=+0.119876983 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  3 01:58:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:58:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46caf98d-7372-4f0f-81b1-1d2f7673d44f does not exist
Dec  3 01:58:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 90d5c649-00da-4989-9d52-48d3b2316891 does not exist
Dec  3 01:58:38 compute-0 podman[420667]: 2025-12-03 01:58:38.886925346 +0000 UTC m=+0.120751109 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:58:38 compute-0 podman[420660]: 2025-12-03 01:58:38.893777261 +0000 UTC m=+0.144624729 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 01:58:39 compute-0 nova_compute[351485]: 2025-12-03 01:58:39.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:58:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:42 compute-0 nova_compute[351485]: 2025-12-03 01:58:42.882 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:44 compute-0 nova_compute[351485]: 2025-12-03 01:58:44.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:44 compute-0 podman[420777]: 2025-12-03 01:58:44.843316424 +0000 UTC m=+0.124842505 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:58:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:58:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:58:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292056608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:58:47 compute-0 nova_compute[351485]: 2025-12-03 01:58:47.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:47 compute-0 podman[420796]: 2025-12-03 01:58:47.898294373 +0000 UTC m=+0.145322928 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, version=9.4, architecture=x86_64, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 01:58:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:49 compute-0 nova_compute[351485]: 2025-12-03 01:58:49.099 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:51 compute-0 podman[420818]: 2025-12-03 01:58:51.860387227 +0000 UTC m=+0.095688445 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, distribution-scope=public)
Dec  3 01:58:51 compute-0 podman[420819]: 2025-12-03 01:58:51.879301625 +0000 UTC m=+0.107178602 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 01:58:51 compute-0 podman[420820]: 2025-12-03 01:58:51.891576005 +0000 UTC m=+0.106709779 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  3 01:58:51 compute-0 podman[420817]: 2025-12-03 01:58:51.907625552 +0000 UTC m=+0.151007810 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 01:58:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:52 compute-0 nova_compute[351485]: 2025-12-03 01:58:52.890 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:54 compute-0 nova_compute[351485]: 2025-12-03 01:58:54.103 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:57 compute-0 nova_compute[351485]: 2025-12-03 01:58:57.892 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:58:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.108 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.615 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.617 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 01:58:59 compute-0 nova_compute[351485]: 2025-12-03 01:58:59.617 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.625 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:58:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:58:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:58:59 compute-0 podman[158098]: time="2025-12-03T01:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:58:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:58:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
Dec  3 01:59:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:59:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000760888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.165 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.299 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.308 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.309 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.309 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 01:59:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.831 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.832 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3785MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.833 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.903 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.904 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 01:59:00 compute-0 nova_compute[351485]: 2025-12-03 01:59:00.979 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:59:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:59:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 01:59:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495191346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.483 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.493 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.519 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.521 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 01:59:01 compute-0 nova_compute[351485]: 2025-12-03 01:59:01.521 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:59:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.519 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.519 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.520 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.853 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.854 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.855 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.857 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 01:59:02 compute-0 nova_compute[351485]: 2025-12-03 01:59:02.893 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.111 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.467 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.485 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.486 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.487 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.488 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.489 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:04 compute-0 nova_compute[351485]: 2025-12-03 01:59:04.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:05 compute-0 nova_compute[351485]: 2025-12-03 01:59:05.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:07 compute-0 nova_compute[351485]: 2025-12-03 01:59:07.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:07 compute-0 nova_compute[351485]: 2025-12-03 01:59:07.896 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:09 compute-0 nova_compute[351485]: 2025-12-03 01:59:09.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:09 compute-0 podman[420946]: 2025-12-03 01:59:09.852177915 +0000 UTC m=+0.091012075 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 01:59:09 compute-0 podman[420944]: 2025-12-03 01:59:09.857082403 +0000 UTC m=+0.100721088 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 01:59:09 compute-0 podman[420945]: 2025-12-03 01:59:09.895209707 +0000 UTC m=+0.137448893 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 01:59:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:11 compute-0 nova_compute[351485]: 2025-12-03 01:59:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:11 compute-0 nova_compute[351485]: 2025-12-03 01:59:11.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 01:59:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:12 compute-0 nova_compute[351485]: 2025-12-03 01:59:12.901 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.520210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153520264, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3483105, "memory_usage": 3531680, "flush_reason": "Manual Compaction"}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153548802, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3428285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25530, "largest_seqno": 27580, "table_properties": {"data_size": 3418745, "index_size": 6098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18680, "raw_average_key_size": 20, "raw_value_size": 3400065, "raw_average_value_size": 3659, "num_data_blocks": 270, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764726918, "oldest_key_time": 1764726918, "file_creation_time": 1764727153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 28698 microseconds, and 13875 cpu microseconds.
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.548903) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3428285 bytes OK
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.548931) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551913) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551938) EVENT_LOG_v1 {"time_micros": 1764727153551931, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.551963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3474521, prev total WAL file size 3474521, number of live WAL files 2.
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.554454) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3347KB)], [59(7308KB)]
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153554629, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10911910, "oldest_snapshot_seqno": -1}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5030 keys, 9140224 bytes, temperature: kUnknown
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153651583, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9140224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9104675, "index_size": 21871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124863, "raw_average_key_size": 24, "raw_value_size": 9011761, "raw_average_value_size": 1791, "num_data_blocks": 907, "num_entries": 5030, "num_filter_entries": 5030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.651856) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9140224 bytes
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.654796) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.5 rd, 94.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5544, records dropped: 514 output_compression: NoCompression
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.654832) EVENT_LOG_v1 {"time_micros": 1764727153654815, "job": 32, "event": "compaction_finished", "compaction_time_micros": 97027, "compaction_time_cpu_micros": 46051, "output_level": 6, "num_output_files": 1, "total_output_size": 9140224, "num_input_records": 5544, "num_output_records": 5030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153656321, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727153659362, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.553891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:13 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-01:59:13.659760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 01:59:14 compute-0 nova_compute[351485]: 2025-12-03 01:59:14.121 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:15 compute-0 podman[421006]: 2025-12-03 01:59:15.881297073 +0000 UTC m=+0.132791292 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 01:59:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:17 compute-0 nova_compute[351485]: 2025-12-03 01:59:17.903 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:18 compute-0 podman[421025]: 2025-12-03 01:59:18.859613216 +0000 UTC m=+0.117554832 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  3 01:59:19 compute-0 nova_compute[351485]: 2025-12-03 01:59:19.125 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.504 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.505 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95ea43dd00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.514 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.518 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.519 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.520 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T01:59:19.520153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.560 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.596 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 49.0390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T01:59:19.599226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.605 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.612 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T01:59:19.613750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.614 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T01:59:19.616306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.617 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.618 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T01:59:19.620015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.620 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T01:59:19.622480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.623 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.623 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T01:59:19.624964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.656 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.657 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.657 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.690 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.692 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.692 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.695 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T01:59:19.695441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.775 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.776 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.777 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.869 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.870 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.871 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.872 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T01:59:19.873419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.874 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.874 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.875 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T01:59:19.876202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.876 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.877 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.877 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.878 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T01:59:19.881316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.881 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.882 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.882 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.883 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.884 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.884 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T01:59:19.886815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.887 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.888 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T01:59:19.889353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.889 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.890 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.891 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.891 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.892 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.893 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.894 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.894 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.895 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.895 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.896 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.896 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T01:59:19.893863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T01:59:19.898685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.899 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6964190045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.899 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.900 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.901 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T01:59:19.903863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.904 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.905 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.905 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.906 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.906 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T01:59:19.908045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T01:59:19.909653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.909 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 154220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 36610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T01:59:19.911299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.912 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4826 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T01:59:19.912704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T01:59:19.914294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.914 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.915 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T01:59:19.917052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T01:59:19.919060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.920 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T01:59:19.920624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.921 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.923 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.924 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 01:59:19.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 01:59:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:22 compute-0 podman[421048]: 2025-12-03 01:59:22.862883846 +0000 UTC m=+0.105956285 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:59:22 compute-0 podman[421047]: 2025-12-03 01:59:22.884774343 +0000 UTC m=+0.133670316 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41)
Dec  3 01:59:22 compute-0 podman[421054]: 2025-12-03 01:59:22.885446142 +0000 UTC m=+0.115949087 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 01:59:22 compute-0 podman[421046]: 2025-12-03 01:59:22.888431926 +0000 UTC m=+0.152585369 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec  3 01:59:22 compute-0 nova_compute[351485]: 2025-12-03 01:59:22.905 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:24 compute-0 nova_compute[351485]: 2025-12-03 01:59:24.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:27 compute-0 nova_compute[351485]: 2025-12-03 01:59:27.909 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_01:59:28
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:59:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 01:59:29 compute-0 nova_compute[351485]: 2025-12-03 01:59:29.134 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:29 compute-0 podman[158098]: time="2025-12-03T01:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:59:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:59:29 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec  3 01:59:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 01:59:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 01:59:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:32 compute-0 nova_compute[351485]: 2025-12-03 01:59:32.911 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:34 compute-0 nova_compute[351485]: 2025-12-03 01:59:34.141 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:37 compute-0 nova_compute[351485]: 2025-12-03 01:59:37.914 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011048885483818454 of space, bias 1.0, pg target 0.33146656451455364 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 01:59:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 01:59:39 compute-0 nova_compute[351485]: 2025-12-03 01:59:39.148 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1a8310ee-4889-494b-9a16-96c42de02887 does not exist
Dec  3 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 51e167a9-95f3-49ed-a936-70a3f1c1736b does not exist
Dec  3 01:59:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1018ab37-c28e-41ef-9033-744c9d25a660 does not exist
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 01:59:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 01:59:40 compute-0 podman[421285]: 2025-12-03 01:59:40.864725932 +0000 UTC m=+0.116559914 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 01:59:40 compute-0 podman[421283]: 2025-12-03 01:59:40.865766211 +0000 UTC m=+0.139556591 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 01:59:40 compute-0 podman[421284]: 2025-12-03 01:59:40.871675368 +0000 UTC m=+0.143545384 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.56658122 +0000 UTC m=+0.099422152 container create ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.519498674 +0000 UTC m=+0.052339696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:41 compute-0 systemd[1]: Started libpod-conmon-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope.
Dec  3 01:59:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.713891479 +0000 UTC m=+0.246732441 container init ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.732092011 +0000 UTC m=+0.264932983 container start ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.737677859 +0000 UTC m=+0.270518831 container attach ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:59:41 compute-0 interesting_vaughan[421473]: 167 167
Dec  3 01:59:41 compute-0 systemd[1]: libpod-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope: Deactivated successfully.
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.750952052 +0000 UTC m=+0.283793044 container died ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 01:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b43dd4f0179b4acbfd98621ca50e452e96411d78c337beb6a6998c507c51f6e-merged.mount: Deactivated successfully.
Dec  3 01:59:41 compute-0 podman[421457]: 2025-12-03 01:59:41.835936466 +0000 UTC m=+0.368777438 container remove ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:59:41 compute-0 systemd[1]: libpod-conmon-ad4991d21701bae496a73898260c6d020b2df4374e0ba9ec63ae91f170caecac.scope: Deactivated successfully.
Dec  3 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.147501701 +0000 UTC m=+0.101303164 container create 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.114049009 +0000 UTC m=+0.067850552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:59:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6543 writes, 26K keys, 6543 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6543 writes, 1250 syncs, 5.23 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 641 writes, 2055 keys, 641 commit groups, 1.0 writes per commit group, ingest: 2.25 MB, 0.00 MB/s#012Interval WAL: 641 writes, 259 syncs, 2.47 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:59:42 compute-0 systemd[1]: Started libpod-conmon-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope.
Dec  3 01:59:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.312163729 +0000 UTC m=+0.265965192 container init 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.328615942 +0000 UTC m=+0.282417405 container start 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 01:59:42 compute-0 podman[421496]: 2025-12-03 01:59:42.333110719 +0000 UTC m=+0.286912182 container attach 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:59:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:42 compute-0 nova_compute[351485]: 2025-12-03 01:59:42.915 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:43 compute-0 strange_volhard[421511]: --> passed data devices: 0 physical, 3 LVM
Dec  3 01:59:43 compute-0 strange_volhard[421511]: --> relative data size: 1.0
Dec  3 01:59:43 compute-0 strange_volhard[421511]: --> All data devices are unavailable
Dec  3 01:59:43 compute-0 systemd[1]: libpod-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Deactivated successfully.
Dec  3 01:59:43 compute-0 systemd[1]: libpod-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Consumed 1.234s CPU time.
Dec  3 01:59:43 compute-0 podman[421496]: 2025-12-03 01:59:43.643656729 +0000 UTC m=+1.597458222 container died 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-957b9a1fa41dbd703e0ce6ebcf4cab184f248b15645f35a6263928346e87a400-merged.mount: Deactivated successfully.
Dec  3 01:59:43 compute-0 podman[421496]: 2025-12-03 01:59:43.759137392 +0000 UTC m=+1.712938855 container remove 8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_volhard, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 01:59:43 compute-0 systemd[1]: libpod-conmon-8791ceeccd38951c9d544e72bb3522ece72aae1f69b8f5809cc1c1f3fd99d072.scope: Deactivated successfully.
Dec  3 01:59:44 compute-0 nova_compute[351485]: 2025-12-03 01:59:44.152 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:44 compute-0 podman[421693]: 2025-12-03 01:59:44.895778055 +0000 UTC m=+0.082471234 container create e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 01:59:44 compute-0 podman[421693]: 2025-12-03 01:59:44.861320715 +0000 UTC m=+0.048013974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:44 compute-0 systemd[1]: Started libpod-conmon-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope.
Dec  3 01:59:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.011495984 +0000 UTC m=+0.198189183 container init e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec  3 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.022432322 +0000 UTC m=+0.209125541 container start e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.029213533 +0000 UTC m=+0.215906732 container attach e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 01:59:45 compute-0 wonderful_montalcini[421709]: 167 167
Dec  3 01:59:45 compute-0 systemd[1]: libpod-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope: Deactivated successfully.
Dec  3 01:59:45 compute-0 conmon[421709]: conmon e873cceb582c393b10d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope/container/memory.events
Dec  3 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.035427548 +0000 UTC m=+0.222120737 container died e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 01:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ee6a01a6ff321a65af10dc291d4b0b0f45a0344dcf86457b535ba4f1560266-merged.mount: Deactivated successfully.
Dec  3 01:59:45 compute-0 podman[421693]: 2025-12-03 01:59:45.098875885 +0000 UTC m=+0.285569074 container remove e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 01:59:45 compute-0 systemd[1]: libpod-conmon-e873cceb582c393b10d2acdff3bbdb6d6ec7d372610d7b64a1cf17bee0d2080e.scope: Deactivated successfully.
Dec  3 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.364938959 +0000 UTC m=+0.087852125 container create 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.330938701 +0000 UTC m=+0.053851937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:45 compute-0 systemd[1]: Started libpod-conmon-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope.
Dec  3 01:59:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.53005781 +0000 UTC m=+0.252970986 container init 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.54710356 +0000 UTC m=+0.270016716 container start 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:59:45 compute-0 podman[421736]: 2025-12-03 01:59:45.552216014 +0000 UTC m=+0.275129220 container attach 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 01:59:46 compute-0 quirky_shamir[421751]: {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    "0": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "devices": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "/dev/loop3"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            ],
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_name": "ceph_lv0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_size": "21470642176",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "name": "ceph_lv0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "tags": {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_name": "ceph",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.crush_device_class": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.encrypted": "0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_id": "0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.vdo": "0"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            },
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "vg_name": "ceph_vg0"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        }
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    ],
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    "1": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "devices": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "/dev/loop4"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            ],
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_name": "ceph_lv1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_size": "21470642176",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "name": "ceph_lv1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "tags": {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_name": "ceph",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.crush_device_class": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.encrypted": "0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_id": "1",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.vdo": "0"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            },
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "vg_name": "ceph_vg1"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        }
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    ],
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    "2": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "devices": [
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "/dev/loop5"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            ],
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_name": "ceph_lv2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_size": "21470642176",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "name": "ceph_lv2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "tags": {
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cephx_lockbox_secret": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.cluster_name": "ceph",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.crush_device_class": "",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.encrypted": "0",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osd_id": "2",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:                "ceph.vdo": "0"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            },
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "type": "block",
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:            "vg_name": "ceph_vg2"
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:        }
Dec  3 01:59:46 compute-0 quirky_shamir[421751]:    ]
Dec  3 01:59:46 compute-0 quirky_shamir[421751]: }
Dec  3 01:59:46 compute-0 systemd[1]: libpod-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope: Deactivated successfully.
Dec  3 01:59:46 compute-0 podman[421736]: 2025-12-03 01:59:46.361800605 +0000 UTC m=+1.084713861 container died 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 01:59:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-81142c2cc8a89cfac84eaae2ad044fd2e95cc5c24bd343a534583f71ad1740f8-merged.mount: Deactivated successfully.
Dec  3 01:59:46 compute-0 podman[421736]: 2025-12-03 01:59:46.468879981 +0000 UTC m=+1.191793147 container remove 6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 01:59:46 compute-0 systemd[1]: libpod-conmon-6bafcd6fc27e76b2dfb309599cecbf3f70f1d381371dd7e39ab41fef09e315b8.scope: Deactivated successfully.
Dec  3 01:59:46 compute-0 podman[421761]: 2025-12-03 01:59:46.541218508 +0000 UTC m=+0.142646608 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 01:59:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 01:59:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 01:59:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 01:59:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638375043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.617268056 +0000 UTC m=+0.075249461 container create 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.584431411 +0000 UTC m=+0.042412846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:47 compute-0 systemd[1]: Started libpod-conmon-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope.
Dec  3 01:59:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.75339855 +0000 UTC m=+0.211380015 container init 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.770062669 +0000 UTC m=+0.228044064 container start 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.77506863 +0000 UTC m=+0.233050065 container attach 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 01:59:47 compute-0 interesting_villani[421944]: 167 167
Dec  3 01:59:47 compute-0 systemd[1]: libpod-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope: Deactivated successfully.
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.783086016 +0000 UTC m=+0.241067441 container died 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-37fcca9bcce26c47610b46d30a47676e16328643624f5e318ad706c45c4aaa68-merged.mount: Deactivated successfully.
Dec  3 01:59:47 compute-0 podman[421928]: 2025-12-03 01:59:47.874670625 +0000 UTC m=+0.332652040 container remove 98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 01:59:47 compute-0 systemd[1]: libpod-conmon-98c52da358e806830a27d6e069a0c8f2ae4668624066832c56a6f41960fc9532.scope: Deactivated successfully.
Dec  3 01:59:47 compute-0 nova_compute[351485]: 2025-12-03 01:59:47.917 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.15851216 +0000 UTC m=+0.104396762 container create d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.117458423 +0000 UTC m=+0.063343065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 01:59:48 compute-0 systemd[1]: Started libpod-conmon-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope.
Dec  3 01:59:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.346179735 +0000 UTC m=+0.292064337 container init d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.362258898 +0000 UTC m=+0.308143470 container start d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 01:59:48 compute-0 podman[421967]: 2025-12-03 01:59:48.367791324 +0000 UTC m=+0.313675936 container attach d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec  3 01:59:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:59:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 7821 writes, 31K keys, 7821 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7821 writes, 1612 syncs, 4.85 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 721 writes, 2335 keys, 721 commit groups, 1.0 writes per commit group, ingest: 2.50 MB, 0.00 MB/s#012Interval WAL: 721 writes, 280 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:59:49 compute-0 nova_compute[351485]: 2025-12-03 01:59:49.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:49 compute-0 laughing_feistel[421981]: {
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_id": 2,
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "type": "bluestore"
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    },
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_id": 1,
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "type": "bluestore"
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    },
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_id": 0,
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:        "type": "bluestore"
Dec  3 01:59:49 compute-0 laughing_feistel[421981]:    }
Dec  3 01:59:49 compute-0 laughing_feistel[421981]: }
Dec  3 01:59:49 compute-0 systemd[1]: libpod-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Deactivated successfully.
Dec  3 01:59:49 compute-0 systemd[1]: libpod-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Consumed 1.180s CPU time.
Dec  3 01:59:49 compute-0 podman[421967]: 2025-12-03 01:59:49.543300552 +0000 UTC m=+1.489185144 container died d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 01:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd864cbc9ad34d9b568167a6e276c3767ede763dc7ee9e9054e901ceba23fc5b-merged.mount: Deactivated successfully.
Dec  3 01:59:49 compute-0 podman[421967]: 2025-12-03 01:59:49.650427359 +0000 UTC m=+1.596311951 container remove d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feistel, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 01:59:49 compute-0 systemd[1]: libpod-conmon-d71d4d8bd84e2053a7a2e06bb058c3307211caf2879d1220377ed7d89d06564e.scope: Deactivated successfully.
Dec  3 01:59:49 compute-0 podman[422016]: 2025-12-03 01:59:49.694627714 +0000 UTC m=+0.111141121 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  3 01:59:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 01:59:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 01:59:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 07cb493a-7e00-42d3-a379-0a323afc73e8 does not exist
Dec  3 01:59:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2ae93c6f-210b-4f61-8942-86e77ef060de does not exist
Dec  3 01:59:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 01:59:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:52 compute-0 nova_compute[351485]: 2025-12-03 01:59:52.923 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:53 compute-0 podman[422094]: 2025-12-03 01:59:53.866773711 +0000 UTC m=+0.112749387 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec  3 01:59:53 compute-0 podman[422095]: 2025-12-03 01:59:53.87315126 +0000 UTC m=+0.115193205 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 01:59:53 compute-0 podman[422093]: 2025-12-03 01:59:53.915801471 +0000 UTC m=+0.167104257 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 01:59:53 compute-0 podman[422096]: 2025-12-03 01:59:53.918262351 +0000 UTC m=+0.151774956 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 01:59:54 compute-0 nova_compute[351485]: 2025-12-03 01:59:54.161 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 01:59:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6387 writes, 26K keys, 6387 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6387 writes, 1201 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 498 writes, 1537 keys, 498 commit groups, 1.0 writes per commit group, ingest: 1.24 MB, 0.00 MB/s#012Interval WAL: 498 writes, 203 syncs, 2.45 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 01:59:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 01:59:57 compute-0 nova_compute[351485]: 2025-12-03 01:59:57.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 01:59:57 compute-0 nova_compute[351485]: 2025-12-03 01:59:57.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 01:59:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 01:59:59 compute-0 nova_compute[351485]: 2025-12-03 01:59:59.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.626 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.627 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 01:59:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 01:59:59.628 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 01:59:59 compute-0 podman[158098]: time="2025-12-03T01:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 01:59:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 01:59:59 compute-0 podman[158098]: @ - - [03/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec  3 02:00:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:00 compute-0 nova_compute[351485]: 2025-12-03 02:00:00.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:00 compute-0 nova_compute[351485]: 2025-12-03 02:00:00.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.293 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.294 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:00:01 compute-0 nova_compute[351485]: 2025-12-03 02:00:01.294 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:00:01 compute-0 openstack_network_exporter[368278]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:00:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.628 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.656 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.657 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.658 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.658 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.659 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.687 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.688 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.688 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.689 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.689 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:02 compute-0 nova_compute[351485]: 2025-12-03 02:00:02.927 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:00:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967752976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.215 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.366 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.367 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.367 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.378 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.867 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3740MB free_disk=59.922000885009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.868 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.949 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.949 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.950 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.950 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.964 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.993 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:00:03 compute-0 nova_compute[351485]: 2025-12-03 02:00:03.993 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.012 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.051 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.132 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:00:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530235725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.659 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.667 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.681 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.682 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:00:04 compute-0 nova_compute[351485]: 2025-12-03 02:00:04.682 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:06 compute-0 nova_compute[351485]: 2025-12-03 02:00:06.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:07 compute-0 nova_compute[351485]: 2025-12-03 02:00:07.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:07 compute-0 nova_compute[351485]: 2025-12-03 02:00:07.933 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:09 compute-0 nova_compute[351485]: 2025-12-03 02:00:09.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.636 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.638 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:00:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:11.640 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:11 compute-0 nova_compute[351485]: 2025-12-03 02:00:11.645 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:11 compute-0 podman[422225]: 2025-12-03 02:00:11.852629126 +0000 UTC m=+0.093131204 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:00:11 compute-0 podman[422223]: 2025-12-03 02:00:11.864794089 +0000 UTC m=+0.122981545 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  3 02:00:11 compute-0 podman[422224]: 2025-12-03 02:00:11.874153232 +0000 UTC m=+0.119043674 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  3 02:00:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:00:12 compute-0 nova_compute[351485]: 2025-12-03 02:00:12.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:14 compute-0 nova_compute[351485]: 2025-12-03 02:00:14.184 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.339 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.340 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.363 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:00:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.467 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.468 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.481 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.482 351492 INFO nova.compute.claims [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:00:16 compute-0 nova_compute[351485]: 2025-12-03 02:00:16.670 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:16 compute-0 podman[422283]: 2025-12-03 02:00:16.888800328 +0000 UTC m=+0.144224483 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:00:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:00:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446501929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.194 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.209 351492 DEBUG nova.compute.provider_tree [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.235 351492 DEBUG nova.scheduler.client.report [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.270 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.271 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.321 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.322 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.345 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.382 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.461 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.463 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.464 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating image(s)#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.500 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.555 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.603 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.613 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.700 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.701 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.702 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.702 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.735 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.759 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:17 compute-0 nova_compute[351485]: 2025-12-03 02:00:17.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.114 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.292 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:00:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.522 351492 DEBUG nova.objects.instance [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.580 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.633 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.643 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.729 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.730 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.731 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.732 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.782 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:18 compute-0 nova_compute[351485]: 2025-12-03 02:00:18.791 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.306 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.480 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Successfully updated port: d0c565d0-5299-45e5-84ac-ea722711af3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.567 351492 DEBUG nova.compute.manager [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.567 351492 DEBUG nova.compute.manager [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing instance network info cache due to event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.568 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.569 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.582 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Ensure instance console log exists: /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.583 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.584 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:19 compute-0 nova_compute[351485]: 2025-12-03 02:00:19.753 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.177 351492 DEBUG nova.network.neutron [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.195 351492 DEBUG oslo_concurrency.lockutils [req-a4fb9f20-73e9-4a72-ac6a-6cd6885bd56e req-915728fb-9341-4188-9246-9f754e39e23a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.195 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.196 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:00:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 154 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 746 KiB/s wr, 14 op/s
Dec  3 02:00:20 compute-0 nova_compute[351485]: 2025-12-03 02:00:20.429 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:00:20 compute-0 podman[422620]: 2025-12-03 02:00:20.889377614 +0000 UTC m=+0.141232039 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.389 351492 DEBUG nova.network.neutron [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.418 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.418 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance network_info: |[{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.423 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start _get_guest_xml network_info=[{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.434 351492 WARNING nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.441 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.443 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.448 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.449 351492 DEBUG nova.virt.libvirt.host [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.450 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.451 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.452 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.453 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.453 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.454 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.455 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.456 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.456 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.457 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.458 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.458 351492 DEBUG nova.virt.hardware [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.464 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:00:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161021410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:00:21 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.995 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:21.999 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 170 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec  3 02:00:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:00:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545262422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.494 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.558 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.575 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:22 compute-0 nova_compute[351485]: 2025-12-03 02:00:22.939 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:00:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240562883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.095 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.098 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:00:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 02:00:23 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.099 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.102 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.106 351492 DEBUG nova.objects.instance [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.143 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <uuid>55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</uuid>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <name>instance-00000003</name>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <memory>524288</memory>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:name>vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j</nova:name>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:00:21</nova:creationTime>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:flavor name="m1.small">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:memory>512</nova:memory>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <nova:port uuid="d0c565d0-5299-45e5-84ac-ea722711af3d">
Dec  3 02:00:23 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="192.168.0.227" ipVersion="4"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <system>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="serial">55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="uuid">55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </system>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <os>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </os>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <features>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </features>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </source>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.eph0">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </source>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <target dev="vdb" bus="virtio"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </source>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:00:23 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:de:1b:b0"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <target dev="tapd0c565d0-52"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/console.log" append="off"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <video>
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </video>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:00:23 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:00:23 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:00:23 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:00:23 compute-0 nova_compute[351485]: </domain>
Dec  3 02:00:23 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.147 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Preparing to wait for external event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.148 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.149 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.149 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.151 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:00:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  3 02:00:23 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.152 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.154 351492 DEBUG nova.network.os_vif_util [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.156 351492 DEBUG os_vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.159 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.160 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.166 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.167 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0c565d0-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.168 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0c565d0-52, col_values=(('external_ids', {'iface-id': 'd0c565d0-5299-45e5-84ac-ea722711af3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:1b:b0', 'vm-uuid': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:23 compute-0 NetworkManager[48912]: <info>  [1764727223.1742] manager: (tapd0c565d0-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.197 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:00:23 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:00:23.098 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-93 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.197 351492 INFO os_vif [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52')#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.288 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.289 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.289 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.290 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:de:1b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.291 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Using config drive#033[00m
Dec  3 02:00:23 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:00:23.151 351492 DEBUG nova.virt.libvirt.vif [None req-a64fc55b-93 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:00:23 compute-0 nova_compute[351485]: 2025-12-03 02:00:23.348 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  3 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.514 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Creating config drive at /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config#033[00m
Dec  3 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.527 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphn03v_ef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.680 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphn03v_ef" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.769 351492 DEBUG nova.storage.rbd_utils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:00:24 compute-0 nova_compute[351485]: 2025-12-03 02:00:24.792 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:00:24 compute-0 podman[422769]: 2025-12-03 02:00:24.870713117 +0000 UTC m=+0.103986910 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:00:24 compute-0 podman[422771]: 2025-12-03 02:00:24.899316792 +0000 UTC m=+0.125992629 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  3 02:00:24 compute-0 podman[422768]: 2025-12-03 02:00:24.910747514 +0000 UTC m=+0.151908209 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:00:24 compute-0 podman[422759]: 2025-12-03 02:00:24.923823293 +0000 UTC m=+0.170308958 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.027 351492 DEBUG oslo_concurrency.processutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.028 351492 INFO nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deleting local config drive /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.config because it was imported into RBD.#033[00m
Dec  3 02:00:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 02:00:25 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 02:00:25 compute-0 kernel: tapd0c565d0-52: entered promiscuous mode
Dec  3 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.1691] manager: (tapd0c565d0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  3 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00040|binding|INFO|Claiming lport d0c565d0-5299-45e5-84ac-ea722711af3d for this chassis.
Dec  3 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00041|binding|INFO|d0c565d0-5299-45e5-84ac-ea722711af3d: Claiming fa:16:3e:de:1b:b0 192.168.0.227
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.173 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.191 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:1b:b0 192.168.0.227'], port_security=['fa:16:3e:de:1b:b0 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d0c565d0-5299-45e5-84ac-ea722711af3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.196 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d0c565d0-5299-45e5-84ac-ea722711af3d in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis#033[00m
Dec  3 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00042|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d ovn-installed in OVS
Dec  3 02:00:25 compute-0 ovn_controller[89134]: 2025-12-03T02:00:25Z|00043|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d up in Southbound
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.203 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.210 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.223 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[257213a7-55b9-4ae4-bdad-97fa4cf7cc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 systemd-machined[138558]: New machine qemu-3-instance-00000003.
Dec  3 02:00:25 compute-0 systemd-udevd[422906]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.2593] device (tapd0c565d0-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:00:25 compute-0 NetworkManager[48912]: <info>  [1764727225.2603] device (tapd0c565d0-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:00:25 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.271 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[635ad974-eb45-412f-854a-1b37263acf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.275 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[7963167d-388e-4138-a0aa-999c5af969bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.303 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3630ec-c87d-4b9d-aed0-713d3220fe9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.334 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e442072a-2d6c-4cef-8bc5-d7049dd90875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 36425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422917, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.359 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[90a05ca3-6162-4929-ae97-af278493e743]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422919, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422919, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.362 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:25 compute-0 nova_compute[351485]: 2025-12-03 02:00:25.366 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.368 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.368 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.369 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:00:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:25.370 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:00:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 02:00:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 02:00:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.511 351492 DEBUG nova.compute.manager [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.512 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.513 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.513 351492 DEBUG oslo_concurrency.lockutils [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.515 351492 DEBUG nova.compute.manager [req-3ae5947a-0880-484b-9022-be866d745edf req-c812f328-25ec-42f2-8d72-a4562bee6a2e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Processing event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.754 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7532237, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.754 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Started (Lifecycle Event)#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.758 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.767 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.775 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.788 351492 INFO nova.virt.libvirt.driver [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance spawned successfully.#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.789 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.793 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.835 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.836 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.836 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.837 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.838 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.839 351492 DEBUG nova.virt.libvirt.driver [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.886 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.886 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7537677, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.887 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.917 351492 INFO nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 9.46 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.918 351492 DEBUG nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.921 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.938 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727226.7631435, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.938 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.974 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.982 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:00:26 compute-0 nova_compute[351485]: 2025-12-03 02:00:26.999 351492 INFO nova.compute.manager [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 10.57 seconds to build instance.#033[00m
Dec  3 02:00:27 compute-0 nova_compute[351485]: 2025-12-03 02:00:27.023 351492 DEBUG oslo_concurrency.lockutils [None req-a64fc55b-936f-4e39-8da5-e61bcb1bbf32 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:27 compute-0 nova_compute[351485]: 2025-12-03 02:00:27.942 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:00:28
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.mgr', 'backups']
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.431 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 DEBUG oslo_concurrency.lockutils [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 DEBUG nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:00:28 compute-0 nova_compute[351485]: 2025-12-03 02:00:28.432 351492 WARNING nova.compute.manager [req-34da7fbc-4fb7-42ee-a591-6df2b13e28a7 req-92099847-48e6-43cc-beea-f35989feb6d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received unexpected event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with vm_state active and task_state None.#033[00m
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:00:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:00:29 compute-0 podman[158098]: time="2025-12-03T02:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:00:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:00:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec  3 02:00:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 491 KiB/s rd, 1.4 MiB/s wr, 64 op/s
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:00:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:00:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 664 KiB/s wr, 74 op/s
Dec  3 02:00:32 compute-0 nova_compute[351485]: 2025-12-03 02:00:32.944 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:33 compute-0 nova_compute[351485]: 2025-12-03 02:00:33.174 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 71 op/s
Dec  3 02:00:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  3 02:00:37 compute-0 nova_compute[351485]: 2025-12-03 02:00:37.947 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:38 compute-0 nova_compute[351485]: 2025-12-03 02:00:38.177 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013737500610470795 of space, bias 1.0, pg target 0.41212501831412385 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:00:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:00:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 57 op/s
Dec  3 02:00:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 33 op/s
Dec  3 02:00:42 compute-0 podman[423001]: 2025-12-03 02:00:42.866645046 +0000 UTC m=+0.110947956 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:00:42 compute-0 podman[423002]: 2025-12-03 02:00:42.887141553 +0000 UTC m=+0.130373383 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec  3 02:00:42 compute-0 podman[423003]: 2025-12-03 02:00:42.905370387 +0000 UTC m=+0.142242658 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:00:42 compute-0 nova_compute[351485]: 2025-12-03 02:00:42.950 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.061733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243061762, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 935, "num_deletes": 250, "total_data_size": 1327700, "memory_usage": 1348184, "flush_reason": "Manual Compaction"}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243072309, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 802916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27581, "largest_seqno": 28515, "table_properties": {"data_size": 799192, "index_size": 1440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9812, "raw_average_key_size": 20, "raw_value_size": 791249, "raw_average_value_size": 1665, "num_data_blocks": 65, "num_entries": 475, "num_filter_entries": 475, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727154, "oldest_key_time": 1764727154, "file_creation_time": 1764727243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10637 microseconds, and 3213 cpu microseconds.
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.072357) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 802916 bytes OK
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.072383) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076625) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076638) EVENT_LOG_v1 {"time_micros": 1764727243076634, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.076653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1323209, prev total WAL file size 1323209, number of live WAL files 2.
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.077481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(784KB)], [62(8926KB)]
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243077610, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9943140, "oldest_snapshot_seqno": -1}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5032 keys, 7156338 bytes, temperature: kUnknown
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243136948, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7156338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7124440, "index_size": 18220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125062, "raw_average_key_size": 24, "raw_value_size": 7035057, "raw_average_value_size": 1398, "num_data_blocks": 758, "num_entries": 5032, "num_filter_entries": 5032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.137267) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7156338 bytes
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.141268) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.3 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(21.3) write-amplify(8.9) OK, records in: 5505, records dropped: 473 output_compression: NoCompression
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.141307) EVENT_LOG_v1 {"time_micros": 1764727243141290, "job": 34, "event": "compaction_finished", "compaction_time_micros": 59428, "compaction_time_cpu_micros": 34847, "output_level": 6, "num_output_files": 1, "total_output_size": 7156338, "num_input_records": 5505, "num_output_records": 5032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243142294, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727243146245, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.077195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:00:43.146701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:00:43 compute-0 nova_compute[351485]: 2025-12-03 02:00:43.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 9 op/s
Dec  3 02:00:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:00:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:00:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:00:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856709503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:00:47 compute-0 podman[423059]: 2025-12-03 02:00:47.906902363 +0000 UTC m=+0.159566156 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  3 02:00:47 compute-0 nova_compute[351485]: 2025-12-03 02:00:47.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:48 compute-0 nova_compute[351485]: 2025-12-03 02:00:48.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1964bf0c-99d1-4a3b-b321-26f7ad476fe4 does not exist
Dec  3 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d27a319b-0043-4db3-bdc5-4eb11c1e4845 does not exist
Dec  3 02:00:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 81ee39bb-7d62-4b84-b506-59eb4598e8f3 does not exist
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:00:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:00:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:00:51 compute-0 podman[423233]: 2025-12-03 02:00:51.532859396 +0000 UTC m=+0.109989349 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec  3 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:00:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.300637941 +0000 UTC m=+0.074505520 container create 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:00:52 compute-0 systemd[1]: Started libpod-conmon-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope.
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.271162311 +0000 UTC m=+0.045029900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.432782013 +0000 UTC m=+0.206649642 container init 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:00:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.449246136 +0000 UTC m=+0.223113745 container start 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:00:52 compute-0 elated_williamson[423378]: 167 167
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.461914343 +0000 UTC m=+0.235781962 container attach 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:00:52 compute-0 systemd[1]: libpod-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope: Deactivated successfully.
Dec  3 02:00:52 compute-0 conmon[423378]: conmon 3e91d013313159965897 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope/container/memory.events
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.465115393 +0000 UTC m=+0.238983002 container died 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-21033c6d95b47a1411bd8769eeb6bf58687fa501e53163d038bcc601d8d527fd-merged.mount: Deactivated successfully.
Dec  3 02:00:52 compute-0 podman[423362]: 2025-12-03 02:00:52.542662357 +0000 UTC m=+0.316529946 container remove 3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:00:52 compute-0 systemd[1]: libpod-conmon-3e91d013313159965897c4fe38a3e0df4061ec72356df665672cc4c94125eded.scope: Deactivated successfully.
Dec  3 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.817672073 +0000 UTC m=+0.097522338 container create 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.775152805 +0000 UTC m=+0.055003120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:52 compute-0 systemd[1]: Started libpod-conmon-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope.
Dec  3 02:00:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:52 compute-0 nova_compute[351485]: 2025-12-03 02:00:52.954 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:52 compute-0 podman[423403]: 2025-12-03 02:00:52.982083634 +0000 UTC m=+0.261933939 container init 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:00:53 compute-0 podman[423403]: 2025-12-03 02:00:52.999193976 +0000 UTC m=+0.279044241 container start 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:00:53 compute-0 podman[423403]: 2025-12-03 02:00:53.007650294 +0000 UTC m=+0.287500589 container attach 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:00:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:53 compute-0 nova_compute[351485]: 2025-12-03 02:00:53.185 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:54 compute-0 determined_elbakyan[423417]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:00:54 compute-0 determined_elbakyan[423417]: --> relative data size: 1.0
Dec  3 02:00:54 compute-0 determined_elbakyan[423417]: --> All data devices are unavailable
Dec  3 02:00:54 compute-0 systemd[1]: libpod-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Deactivated successfully.
Dec  3 02:00:54 compute-0 systemd[1]: libpod-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Consumed 1.147s CPU time.
Dec  3 02:00:54 compute-0 podman[423446]: 2025-12-03 02:00:54.302202734 +0000 UTC m=+0.045464641 container died 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fda6abc460e83f5b8443325d1bcff96c82c89d066182df3d4bd672f898bdbcb1-merged.mount: Deactivated successfully.
Dec  3 02:00:54 compute-0 podman[423446]: 2025-12-03 02:00:54.427646877 +0000 UTC m=+0.170908714 container remove 046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:00:54 compute-0 systemd[1]: libpod-conmon-046fa72c31e171fff5fceec239aa82f90effb817b8a135547514ea2e878adb4f.scope: Deactivated successfully.
Dec  3 02:00:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:55 compute-0 podman[423535]: 2025-12-03 02:00:55.094126288 +0000 UTC m=+0.110684518 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Dec  3 02:00:55 compute-0 podman[423534]: 2025-12-03 02:00:55.111083286 +0000 UTC m=+0.134473499 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:00:55 compute-0 podman[423533]: 2025-12-03 02:00:55.124419801 +0000 UTC m=+0.143321087 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350)
Dec  3 02:00:55 compute-0 podman[423542]: 2025-12-03 02:00:55.147883042 +0000 UTC m=+0.135964160 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:00:55 compute-0 ovn_controller[89134]: 2025-12-03T02:00:55Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.505194236 +0000 UTC m=+0.068747317 container create ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:00:55 compute-0 systemd[1]: Started libpod-conmon-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope.
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.475850759 +0000 UTC m=+0.039403870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.643636745 +0000 UTC m=+0.207189826 container init ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.659754119 +0000 UTC m=+0.223307180 container start ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.664130592 +0000 UTC m=+0.227683653 container attach ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:00:55 compute-0 quizzical_benz[423692]: 167 167
Dec  3 02:00:55 compute-0 systemd[1]: libpod-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope: Deactivated successfully.
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.674786232 +0000 UTC m=+0.238339293 container died ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e56ae81b63d702e2bfdc9f4ace6fc9a664036854d36992ce54e9e8eb36a355-merged.mount: Deactivated successfully.
Dec  3 02:00:55 compute-0 podman[423676]: 2025-12-03 02:00:55.74392968 +0000 UTC m=+0.307482781 container remove ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:00:55 compute-0 systemd[1]: libpod-conmon-ba38fc59ecd0f1cf403406375cf0c81d5720b264d3a30fa5d00352c40af2b8ca.scope: Deactivated successfully.
Dec  3 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.036369746 +0000 UTC m=+0.083245375 container create abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.006121705 +0000 UTC m=+0.052997374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:56 compute-0 systemd[1]: Started libpod-conmon-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope.
Dec  3 02:00:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.2253756 +0000 UTC m=+0.272251269 container init abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.238655274 +0000 UTC m=+0.285530933 container start abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  3 02:00:56 compute-0 podman[423714]: 2025-12-03 02:00:56.246449153 +0000 UTC m=+0.293324822 container attach abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:00:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:56 compute-0 zealous_bose[423729]: {
Dec  3 02:00:56 compute-0 zealous_bose[423729]:    "0": [
Dec  3 02:00:56 compute-0 zealous_bose[423729]:        {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "devices": [
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "/dev/loop3"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            ],
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_name": "ceph_lv0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_size": "21470642176",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "name": "ceph_lv0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "tags": {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_name": "ceph",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.crush_device_class": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.encrypted": "0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_id": "0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.vdo": "0"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            },
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "vg_name": "ceph_vg0"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:        }
Dec  3 02:00:57 compute-0 zealous_bose[423729]:    ],
Dec  3 02:00:57 compute-0 zealous_bose[423729]:    "1": [
Dec  3 02:00:57 compute-0 zealous_bose[423729]:        {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "devices": [
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "/dev/loop4"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            ],
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_name": "ceph_lv1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_size": "21470642176",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "name": "ceph_lv1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "tags": {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_name": "ceph",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.crush_device_class": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.encrypted": "0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_id": "1",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.vdo": "0"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            },
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "vg_name": "ceph_vg1"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:        }
Dec  3 02:00:57 compute-0 zealous_bose[423729]:    ],
Dec  3 02:00:57 compute-0 zealous_bose[423729]:    "2": [
Dec  3 02:00:57 compute-0 zealous_bose[423729]:        {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "devices": [
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "/dev/loop5"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            ],
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_name": "ceph_lv2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_size": "21470642176",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "name": "ceph_lv2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "tags": {
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.cluster_name": "ceph",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.crush_device_class": "",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.encrypted": "0",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osd_id": "2",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:                "ceph.vdo": "0"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            },
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "type": "block",
Dec  3 02:00:57 compute-0 zealous_bose[423729]:            "vg_name": "ceph_vg2"
Dec  3 02:00:57 compute-0 zealous_bose[423729]:        }
Dec  3 02:00:57 compute-0 zealous_bose[423729]:    ]
Dec  3 02:00:57 compute-0 zealous_bose[423729]: }
Dec  3 02:00:57 compute-0 systemd[1]: libpod-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope: Deactivated successfully.
Dec  3 02:00:57 compute-0 podman[423714]: 2025-12-03 02:00:57.051043736 +0000 UTC m=+1.097919405 container died abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-068fb00d96d5b98bbba1fe45195ff1b08ada21b81dc3ac8029eb51c194555d2a-merged.mount: Deactivated successfully.
Dec  3 02:00:57 compute-0 podman[423714]: 2025-12-03 02:00:57.17971964 +0000 UTC m=+1.226595259 container remove abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 02:00:57 compute-0 systemd[1]: libpod-conmon-abc5726f81a64ffe0e66a7345a5ed71a5e4fc4823be96e75ea98b754c44163e9.scope: Deactivated successfully.
Dec  3 02:00:57 compute-0 nova_compute[351485]: 2025-12-03 02:00:57.957 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:00:58 compute-0 nova_compute[351485]: 2025-12-03 02:00:58.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.282216311 +0000 UTC m=+0.095898432 container create 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.248029858 +0000 UTC m=+0.061711969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:58 compute-0 systemd[1]: Started libpod-conmon-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope.
Dec  3 02:00:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.439180942 +0000 UTC m=+0.252863043 container init 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:00:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.457505038 +0000 UTC m=+0.271187119 container start 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:00:58 compute-0 kind_mirzakhani[423900]: 167 167
Dec  3 02:00:58 compute-0 systemd[1]: libpod-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope: Deactivated successfully.
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.47036206 +0000 UTC m=+0.284044171 container attach 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.471720168 +0000 UTC m=+0.285402249 container died 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:00:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c568a41fdb784a20c136e173697e8879a1409cc538b648ebf70b5eddaea5beff-merged.mount: Deactivated successfully.
Dec  3 02:00:58 compute-0 podman[423885]: 2025-12-03 02:00:58.534178417 +0000 UTC m=+0.347860498 container remove 703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:00:58 compute-0 systemd[1]: libpod-conmon-703d15f0a68a7c5f926de8d6cca21cb5f8acd483b55cceafbbac283955cbfb26.scope: Deactivated successfully.
Dec  3 02:00:58 compute-0 podman[423922]: 2025-12-03 02:00:58.845249758 +0000 UTC m=+0.087841605 container create ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:00:58 compute-0 podman[423922]: 2025-12-03 02:00:58.816277593 +0000 UTC m=+0.058869470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:00:58 compute-0 systemd[1]: Started libpod-conmon-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope.
Dec  3 02:00:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.001265533 +0000 UTC m=+0.243857380 container init ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.018682063 +0000 UTC m=+0.261273900 container start ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:00:59 compute-0 podman[423922]: 2025-12-03 02:00:59.025979039 +0000 UTC m=+0.268570906 container attach ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.627 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.630 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:00:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:00:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:00:59 compute-0 podman[158098]: time="2025-12-03T02:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:00:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45381 "" "Go-http-client/1.1"
Dec  3 02:00:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9055 "" "Go-http-client/1.1"
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]: {
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_id": 2,
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "type": "bluestore"
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    },
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_id": 1,
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "type": "bluestore"
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    },
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_id": 0,
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:        "type": "bluestore"
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]:    }
Dec  3 02:01:00 compute-0 happy_bhaskara[423937]: }
Dec  3 02:01:00 compute-0 systemd[1]: libpod-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Deactivated successfully.
Dec  3 02:01:00 compute-0 podman[423922]: 2025-12-03 02:01:00.251905157 +0000 UTC m=+1.494497024 container died ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:01:00 compute-0 systemd[1]: libpod-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Consumed 1.183s CPU time.
Dec  3 02:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae4402f3a81bae4657d6dc551dd97980ae6ecaf32c8d9ac0b7cefae086cf64d3-merged.mount: Deactivated successfully.
Dec  3 02:01:00 compute-0 podman[423922]: 2025-12-03 02:01:00.333944298 +0000 UTC m=+1.576536125 container remove ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:01:00 compute-0 systemd[1]: libpod-conmon-ffe2a87bbe58ec5b5ea38bda1c693cdb3505e41295b0269fb95000f4ad3a0883.scope: Deactivated successfully.
Dec  3 02:01:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:01:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:01:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:01:00 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:01:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e20eea0b-febf-4877-ab95-5e41b6416529 does not exist
Dec  3 02:01:00 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8691eab9-01c4-49ad-b43e-88ba074b5e21 does not exist
Dec  3 02:01:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Dec  3 02:01:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:01:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:01:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:01:01 compute-0 nova_compute[351485]: 2025-12-03 02:01:01.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:01:01 compute-0 ovn_controller[89134]: 2025-12-03T02:01:01Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:1b:b0 192.168.0.227
Dec  3 02:01:01 compute-0 ovn_controller[89134]: 2025-12-03T02:01:01Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:1b:b0 192.168.0.227
Dec  3 02:01:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:01:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455105108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.141 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.285 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.286 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.294 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.295 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.295 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.301 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.302 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.303 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:01:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 182 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 926 KiB/s wr, 11 op/s
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.785 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3473MB free_disk=59.9058723449707GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.787 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.899 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.900 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.900 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.960 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:02 compute-0 nova_compute[351485]: 2025-12-03 02:01:02.996 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:01:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.191 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:01:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481947854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.511 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.519 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.535 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.563 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:01:03 compute-0 nova_compute[351485]: 2025-12-03 02:01:03.563 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:01:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 187 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.563 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.564 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.565 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.861 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:01:04 compute-0 nova_compute[351485]: 2025-12-03 02:01:04.862 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:01:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.464 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.484 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.485 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.485 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.486 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:06 compute-0 nova_compute[351485]: 2025-12-03 02:01:06.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:07 compute-0 nova_compute[351485]: 2025-12-03 02:01:07.965 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.194 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:08 compute-0 nova_compute[351485]: 2025-12-03 02:01:08.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 02:01:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  3 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:01:12 compute-0 nova_compute[351485]: 2025-12-03 02:01:12.969 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:13 compute-0 nova_compute[351485]: 2025-12-03 02:01:13.197 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:13 compute-0 podman[424091]: 2025-12-03 02:01:13.873014171 +0000 UTC m=+0.112088088 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:01:13 compute-0 podman[424089]: 2025-12-03 02:01:13.885865673 +0000 UTC m=+0.122806700 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 02:01:13 compute-0 podman[424090]: 2025-12-03 02:01:13.901109582 +0000 UTC m=+0.136442384 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 02:01:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 47 op/s
Dec  3 02:01:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 439 KiB/s wr, 38 op/s
Dec  3 02:01:17 compute-0 nova_compute[351485]: 2025-12-03 02:01:17.973 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:18 compute-0 nova_compute[351485]: 2025-12-03 02:01:18.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  3 02:01:18 compute-0 podman[424146]: 2025-12-03 02:01:18.875202066 +0000 UTC m=+0.126428622 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.505 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.505 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.506 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.509 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.517 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.522 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:01:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:19.523 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:01:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 8.7 KiB/s wr, 2 op/s
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.540 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 03 Dec 2025 02:01:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 x-openstack-request-id: req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.541 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274", "name": "vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T02:00:14Z", "updated": "2025-12-03T02:00:26Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.227", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:1b:b0"}, {"version": 4, "addr": "192.168.122.186", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:de:1b:b0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:00:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.541 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 used request id req-a52c8bb6-5014-4956-b2e0-eabcad9f47d2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.543 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.548 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.549 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:01:20.550070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.605 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.645 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.676 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:01:20.678804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.685 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.691 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 / tapd0c565d0-52 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.691 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.697 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.699 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:01:20.699501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.700 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.701 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.701 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.702 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.703 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.704 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.704 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:01:20.703338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.724 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:01:20.724409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.726 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.728 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:01:20.728372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:01:20.730956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.777 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.778 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.779 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.810 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.811 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.812 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.860 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.860 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.861 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.864 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.864 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>]
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:01:20.863835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:01:20.866164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.980 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.981 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:20.981 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.066 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.066 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.166 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.167 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.168 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.171 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:01:21.170673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.172 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.173 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 1962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.175 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.176 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:01:21.175510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.177 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.177 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.179 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.180 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.180 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.182 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:01:21.182670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.183 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.184 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.185 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.185 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.187 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.189 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.190 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.190 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:01:21.189174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.193 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.194 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.194 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:01:21.192919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.195 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.196 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.196 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.197 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.197 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:01:21.199664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.200 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.200 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.201 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.202 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.202 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.203 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.203 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:01:21.205761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6964190045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.206 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.207 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5318095604 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.208 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.209 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:01:21.210264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.211 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.212 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.213 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:01:21.213672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:01:21.215733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 273860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 33970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 38600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.218 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 4896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:01:21.217508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:01:21.218787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.220 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:01:21.220494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.224 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.225 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:01:21.224071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:01:21.225708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:01:21.227077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.227 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.228 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:01:21.228918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j>]
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.233 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:01:21.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:01:21 compute-0 podman[424169]: 2025-12-03 02:01:21.911594925 +0000 UTC m=+0.158635029 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, release=1214.1726694543)
Dec  3 02:01:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 8.0 KiB/s wr, 6 op/s
Dec  3 02:01:22 compute-0 nova_compute[351485]: 2025-12-03 02:01:22.975 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:23 compute-0 nova_compute[351485]: 2025-12-03 02:01:23.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec  3 02:01:26 compute-0 podman[424188]: 2025-12-03 02:01:26.096215163 +0000 UTC m=+0.350591715 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 02:01:26 compute-0 podman[424190]: 2025-12-03 02:01:26.119032216 +0000 UTC m=+0.353688723 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:01:26 compute-0 podman[424191]: 2025-12-03 02:01:26.119311864 +0000 UTC m=+0.348296671 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  3 02:01:26 compute-0 podman[424189]: 2025-12-03 02:01:26.1514776 +0000 UTC m=+0.389381938 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 02:01:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:01:27 compute-0 nova_compute[351485]: 2025-12-03 02:01:27.980 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:28 compute-0 nova_compute[351485]: 2025-12-03 02:01:28.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:01:28
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control']
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:01:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:01:29 compute-0 podman[158098]: time="2025-12-03T02:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:01:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:01:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8640 "" "Go-http-client/1.1"
Dec  3 02:01:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:01:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:01:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec  3 02:01:32 compute-0 nova_compute[351485]: 2025-12-03 02:01:32.983 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:33 compute-0 nova_compute[351485]: 2025-12-03 02:01:33.209 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec  3 02:01:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.7 KiB/s wr, 35 op/s
Dec  3 02:01:37 compute-0 nova_compute[351485]: 2025-12-03 02:01:37.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:38 compute-0 nova_compute[351485]: 2025-12-03 02:01:38.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016578097528814222 of space, bias 1.0, pg target 0.4973429258644267 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:01:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:01:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 02:01:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 02:01:42 compute-0 nova_compute[351485]: 2025-12-03 02:01:42.994 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:43 compute-0 nova_compute[351485]: 2025-12-03 02:01:43.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 02:01:44 compute-0 podman[424270]: 2025-12-03 02:01:44.814960271 +0000 UTC m=+0.100032319 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:01:44 compute-0 podman[424272]: 2025-12-03 02:01:44.829443599 +0000 UTC m=+0.089331797 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:01:44 compute-0 podman[424271]: 2025-12-03 02:01:44.84440146 +0000 UTC m=+0.116480692 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 02:01:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  3 02:01:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:01:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:01:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:01:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018134638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:01:47 compute-0 nova_compute[351485]: 2025-12-03 02:01:47.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:48 compute-0 nova_compute[351485]: 2025-12-03 02:01:48.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:49 compute-0 podman[424326]: 2025-12-03 02:01:49.887891149 +0000 UTC m=+0.139924382 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:01:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:52 compute-0 podman[424345]: 2025-12-03 02:01:52.845479088 +0000 UTC m=+0.098532176 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:01:52 compute-0 nova_compute[351485]: 2025-12-03 02:01:52.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:53 compute-0 nova_compute[351485]: 2025-12-03 02:01:53.224 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:56 compute-0 podman[424368]: 2025-12-03 02:01:56.877357175 +0000 UTC m=+0.113495488 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  3 02:01:56 compute-0 podman[424367]: 2025-12-03 02:01:56.883447266 +0000 UTC m=+0.117634194 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:01:56 compute-0 podman[424366]: 2025-12-03 02:01:56.90666642 +0000 UTC m=+0.149472971 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350)
Dec  3 02:01:56 compute-0 podman[424365]: 2025-12-03 02:01:56.917983919 +0000 UTC m=+0.162970371 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:01:58 compute-0 nova_compute[351485]: 2025-12-03 02:01:58.000 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:01:58 compute-0 nova_compute[351485]: 2025-12-03 02:01:58.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:01:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.629 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.630 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:01:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:01:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:01:59 compute-0 podman[158098]: time="2025-12-03T02:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:01:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:01:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 02:02:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:02:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:02:01 compute-0 nova_compute[351485]: 2025-12-03 02:02:01.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:02:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:02:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:02 compute-0 nova_compute[351485]: 2025-12-03 02:02:02.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fdd795ba-a08d-42a7-b758-04a3e4b61687 does not exist
Dec  3 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5aaff57f-d11f-4a6a-81e4-86c424f22628 does not exist
Dec  3 02:02:02 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 86c89704-b372-434c-8c65-acf4ed3a762b does not exist
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:02:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:02:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:03 compute-0 nova_compute[351485]: 2025-12-03 02:02:03.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:02:03 compute-0 podman[424831]: 2025-12-03 02:02:03.987133719 +0000 UTC m=+0.095634095 container create f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:03.939739354 +0000 UTC m=+0.048239810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:04 compute-0 systemd[1]: Started libpod-conmon-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope.
Dec  3 02:02:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.138001578 +0000 UTC m=+0.246501974 container init f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.15085827 +0000 UTC m=+0.259358636 container start f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.156612042 +0000 UTC m=+0.265112448 container attach f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 02:02:04 compute-0 systemd[1]: libpod-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope: Deactivated successfully.
Dec  3 02:02:04 compute-0 xenodochial_bohr[424847]: 167 167
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.163765484 +0000 UTC m=+0.272265870 container died f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f7d770ef0485f1e428e1759f982fc95b4acb23f40cfa5d0ff5fbfe029a74cf-merged.mount: Deactivated successfully.
Dec  3 02:02:04 compute-0 podman[424831]: 2025-12-03 02:02:04.226760538 +0000 UTC m=+0.335260904 container remove f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 02:02:04 compute-0 systemd[1]: libpod-conmon-f32efae010aea2908986c68d6c2b0850d082c2d16bb2f164b785ce208a8a518f.scope: Deactivated successfully.
Dec  3 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.373 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.375 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:02:04 compute-0 nova_compute[351485]: 2025-12-03 02:02:04.375 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.467306613 +0000 UTC m=+0.065093384 container create a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:02:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.440820237 +0000 UTC m=+0.038607048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:04 compute-0 systemd[1]: Started libpod-conmon-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope.
Dec  3 02:02:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.622927356 +0000 UTC m=+0.220714217 container init a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.652079697 +0000 UTC m=+0.249866508 container start a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:02:04 compute-0 podman[424869]: 2025-12-03 02:02:04.66071625 +0000 UTC m=+0.258503111 container attach a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 02:02:05 compute-0 serene_gates[424885]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:02:05 compute-0 serene_gates[424885]: --> relative data size: 1.0
Dec  3 02:02:05 compute-0 serene_gates[424885]: --> All data devices are unavailable
Dec  3 02:02:05 compute-0 systemd[1]: libpod-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Deactivated successfully.
Dec  3 02:02:05 compute-0 systemd[1]: libpod-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Consumed 1.193s CPU time.
Dec  3 02:02:05 compute-0 podman[424914]: 2025-12-03 02:02:05.963839922 +0000 UTC m=+0.043005963 container died a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-75a1cc7383ecf04f90ebf873f1475a4fd11de108fff2d6d80507a45d1c08c08a-merged.mount: Deactivated successfully.
Dec  3 02:02:06 compute-0 podman[424914]: 2025-12-03 02:02:06.090489849 +0000 UTC m=+0.169655850 container remove a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:02:06 compute-0 systemd[1]: libpod-conmon-a75bbc2e0141c67fd8ccd02adf79a03e4e6a2a9e83722bdaa0e29071431038c8.scope: Deactivated successfully.
Dec  3 02:02:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.745 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.759 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.760 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.761 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.762 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.762 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.788 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.788 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.789 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.789 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:02:06 compute-0 nova_compute[351485]: 2025-12-03 02:02:06.790 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:02:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401004877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.270213496 +0000 UTC m=+0.089151472 container create 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.269 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.239442589 +0000 UTC m=+0.058380595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:07 compute-0 systemd[1]: Started libpod-conmon-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope.
Dec  3 02:02:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.408 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.410 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.411 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.413967545 +0000 UTC m=+0.232905561 container init 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.426890339 +0000 UTC m=+0.245828315 container start 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.435 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.435917724 +0000 UTC m=+0.254855790 container attach 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.436 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.437 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 agitated_darwin[425106]: 167 167
Dec  3 02:02:07 compute-0 systemd[1]: libpod-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope: Deactivated successfully.
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.444959708 +0000 UTC m=+0.263897714 container died 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.466 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.467 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 nova_compute[351485]: 2025-12-03 02:02:07.468 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8b43487bf7d9488bffd1b938190402287caed2c356e9e9b4e4353c744f0205c-merged.mount: Deactivated successfully.
Dec  3 02:02:07 compute-0 podman[425088]: 2025-12-03 02:02:07.518459658 +0000 UTC m=+0.337397634 container remove 3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:02:07 compute-0 systemd[1]: libpod-conmon-3664e140ce1e926a621a8fae0d0fa6a3902ee7580f74422a5ad46b40afb868c4.scope: Deactivated successfully.
Dec  3 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.794127382 +0000 UTC m=+0.073309825 container create 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.767031699 +0000 UTC m=+0.046214142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:07 compute-0 systemd[1]: Started libpod-conmon-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope.
Dec  3 02:02:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.914969916 +0000 UTC m=+0.194152369 container init 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.931818461 +0000 UTC m=+0.211000894 container start 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:02:07 compute-0 podman[425130]: 2025-12-03 02:02:07.942191173 +0000 UTC m=+0.221373606 container attach 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.027 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.028 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3348MB free_disk=59.888832092285156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.028 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.029 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.236 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.335 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.336 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.338 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.339 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.340 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:02:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:08 compute-0 nova_compute[351485]: 2025-12-03 02:02:08.616 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:08 compute-0 awesome_hermann[425147]: {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    "0": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "devices": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "/dev/loop3"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            ],
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_name": "ceph_lv0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_size": "21470642176",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "name": "ceph_lv0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "tags": {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_name": "ceph",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.crush_device_class": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.encrypted": "0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_id": "0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.vdo": "0"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            },
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "vg_name": "ceph_vg0"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        }
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    ],
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    "1": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "devices": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "/dev/loop4"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            ],
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_name": "ceph_lv1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_size": "21470642176",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "name": "ceph_lv1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "tags": {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_name": "ceph",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.crush_device_class": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.encrypted": "0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_id": "1",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.vdo": "0"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            },
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "vg_name": "ceph_vg1"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        }
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    ],
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    "2": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "devices": [
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "/dev/loop5"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            ],
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_name": "ceph_lv2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_size": "21470642176",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "name": "ceph_lv2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "tags": {
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.cluster_name": "ceph",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.crush_device_class": "",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.encrypted": "0",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osd_id": "2",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:                "ceph.vdo": "0"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            },
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "type": "block",
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:            "vg_name": "ceph_vg2"
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:        }
Dec  3 02:02:08 compute-0 awesome_hermann[425147]:    ]
Dec  3 02:02:08 compute-0 awesome_hermann[425147]: }
Dec  3 02:02:08 compute-0 systemd[1]: libpod-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope: Deactivated successfully.
Dec  3 02:02:08 compute-0 podman[425130]: 2025-12-03 02:02:08.759797381 +0000 UTC m=+1.038979824 container died 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc7d5b0cf706537f946f339550f40fb651f09e8b6c7e78e651cf00f6a5c43787-merged.mount: Deactivated successfully.
Dec  3 02:02:08 compute-0 podman[425130]: 2025-12-03 02:02:08.850152836 +0000 UTC m=+1.129335279 container remove 587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hermann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 02:02:08 compute-0 systemd[1]: libpod-conmon-587ece0563076530dbba73c2667f4a1519c5bc14718d7667237f810e9f3bfecf.scope: Deactivated successfully.
Dec  3 02:02:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:02:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446100229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.178 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.196 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.220 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.225 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.226 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.228 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.228 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:02:09 compute-0 nova_compute[351485]: 2025-12-03 02:02:09.253 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.070 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.071 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.094929414 +0000 UTC m=+0.096124089 container create 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.058676363 +0000 UTC m=+0.059871108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:10 compute-0 systemd[1]: Started libpod-conmon-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope.
Dec  3 02:02:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.223758552 +0000 UTC m=+0.224953267 container init 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.237489059 +0000 UTC m=+0.238683734 container start 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.242995634 +0000 UTC m=+0.244190349 container attach 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:02:10 compute-0 gallant_wilbur[425343]: 167 167
Dec  3 02:02:10 compute-0 systemd[1]: libpod-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope: Deactivated successfully.
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.25102274 +0000 UTC m=+0.252217415 container died 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 02:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1c46f2db41cb440675ace8933337a3502442fd030704a5c4274faa6d410eb6a-merged.mount: Deactivated successfully.
Dec  3 02:02:10 compute-0 podman[425327]: 2025-12-03 02:02:10.312894823 +0000 UTC m=+0.314089488 container remove 3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:02:10 compute-0 systemd[1]: libpod-conmon-3417ef324aa4ee83f5a3942d40dcaa175cf684e7d3efdb06c0c8465c04630d3c.scope: Deactivated successfully.
Dec  3 02:02:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:10 compute-0 nova_compute[351485]: 2025-12-03 02:02:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.583300919 +0000 UTC m=+0.087244629 container create 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.546666477 +0000 UTC m=+0.050610247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:02:10 compute-0 systemd[1]: Started libpod-conmon-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope.
Dec  3 02:02:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.751746203 +0000 UTC m=+0.255689903 container init 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.765448839 +0000 UTC m=+0.269392509 container start 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:02:10 compute-0 podman[425366]: 2025-12-03 02:02:10.770336026 +0000 UTC m=+0.274279736 container attach 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]: {
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_id": 2,
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "type": "bluestore"
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    },
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_id": 1,
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "type": "bluestore"
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    },
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_id": 0,
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:        "type": "bluestore"
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]:    }
Dec  3 02:02:11 compute-0 compassionate_northcutt[425381]: }
Dec  3 02:02:11 compute-0 systemd[1]: libpod-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Deactivated successfully.
Dec  3 02:02:11 compute-0 podman[425366]: 2025-12-03 02:02:11.804300108 +0000 UTC m=+1.308243788 container died 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:02:11 compute-0 systemd[1]: libpod-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Consumed 1.022s CPU time.
Dec  3 02:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f02b837f994d867dd4a75fb7c75e8d29606adaf3bf38f89667fea3dc4ed18e-merged.mount: Deactivated successfully.
Dec  3 02:02:11 compute-0 podman[425366]: 2025-12-03 02:02:11.880439752 +0000 UTC m=+1.384383432 container remove 1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:02:11 compute-0 systemd[1]: libpod-conmon-1bdb66700bd919b26b04285e4ec35402f92ae9d67307adeb2706dc2f4fc86947.scope: Deactivated successfully.
Dec  3 02:02:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:02:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:02:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7bbd2736-8d9e-4283-826d-ceb4174cbb83 does not exist
Dec  3 02:02:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f689d914-fb4e-4ea7-98c4-9949e7684ed7 does not exist
Dec  3 02:02:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.006 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.239 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:13 compute-0 nova_compute[351485]: 2025-12-03 02:02:13.579 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:02:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:15 compute-0 podman[425480]: 2025-12-03 02:02:15.898059857 +0000 UTC m=+0.134590491 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:02:15 compute-0 podman[425478]: 2025-12-03 02:02:15.902689198 +0000 UTC m=+0.138295276 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:02:15 compute-0 podman[425479]: 2025-12-03 02:02:15.910898609 +0000 UTC m=+0.146185628 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.481 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.482 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:02:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:16 compute-0 nova_compute[351485]: 2025-12-03 02:02:16.485 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.488 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:02:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:16.489 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:02:16 compute-0 nova_compute[351485]: 2025-12-03 02:02:16.492 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:18 compute-0 nova_compute[351485]: 2025-12-03 02:02:18.009 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:18 compute-0 nova_compute[351485]: 2025-12-03 02:02:18.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:02:20 compute-0 nova_compute[351485]: 2025-12-03 02:02:20.613 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:02:20 compute-0 podman[425543]: 2025-12-03 02:02:20.890477737 +0000 UTC m=+0.153172125 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:02:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.609 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.611 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.662 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.781 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.782 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.796 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.798 351492 INFO nova.compute.claims [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:02:22 compute-0 nova_compute[351485]: 2025-12-03 02:02:22.977 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.011 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.247 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:02:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588833365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.504 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.518 351492 DEBUG nova.compute.provider_tree [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.537 351492 DEBUG nova.scheduler.client.report [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.575 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.576 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.632 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.634 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.664 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.712 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.812 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.814 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.815 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating image(s)#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.865 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:23 compute-0 podman[425585]: 2025-12-03 02:02:23.908363064 +0000 UTC m=+0.166582592 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.919 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.982 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:23 compute-0 nova_compute[351485]: 2025-12-03 02:02:23.992 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.060 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.061 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.062 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.063 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b9e804eb90834f1320f9fd6c25a03e15d4052aa8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.112 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.133 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 201 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.531 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.673 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.871 351492 DEBUG nova.objects.instance [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.920 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.971 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:24 compute-0 nova_compute[351485]: 2025-12-03 02:02:24.979 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.044 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.046 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.047 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.047 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.087 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.100 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.126 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Successfully updated port: 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.149 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.150 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.150 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:02:25 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.242 351492 DEBUG nova.compute.manager [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.243 351492 DEBUG nova.compute.manager [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing instance network info cache due to event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.243 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.292 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:02:25 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:25.492 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:25 compute-0 nova_compute[351485]: 2025-12-03 02:02:25.887 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.164 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.165 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Ensure instance console log exists: /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:26 compute-0 nova_compute[351485]: 2025-12-03 02:02:26.168 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:26.486 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.168 351492 DEBUG nova.network.neutron [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.205 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.206 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance network_info: |[{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.206 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.207 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.213 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start _get_guest_xml network_info=[{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.226 351492 WARNING nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.239 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.240 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.249 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.249 351492 DEBUG nova.virt.libvirt.host [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.250 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.250 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T01:53:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bc665ec6-3672-4e52-a447-5267b04e227a',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T01:53:18Z,direct_url=<?>,disk_format='qcow2',id=466cf0db-c3be-4d70-b9f3-08c056c2cad9,min_disk=0,min_ram=0,name='cirros',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T01:53:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.251 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.252 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.253 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.253 351492 DEBUG nova.virt.hardware [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.256 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:02:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200000264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.848 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:27 compute-0 nova_compute[351485]: 2025-12-03 02:02:27.850 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:27 compute-0 podman[425922]: 2025-12-03 02:02:27.860776933 +0000 UTC m=+0.113711344 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 02:02:27 compute-0 podman[425923]: 2025-12-03 02:02:27.870872287 +0000 UTC m=+0.109263718 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:02:27 compute-0 podman[425924]: 2025-12-03 02:02:27.87949865 +0000 UTC m=+0.124742404 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:02:27 compute-0 podman[425921]: 2025-12-03 02:02:27.888329969 +0000 UTC m=+0.147933238 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.012 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:02:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334003611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.348 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:02:28
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.407 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.420 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 228 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:02:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:02:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604432881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:02:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.989 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.991 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:02:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  3 02:02:28 compute-0 nova_compute[351485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.991 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.992 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:02:28 compute-0 nova_compute[351485]: 2025-12-03 02:02:28.994 351492 DEBUG nova.objects.instance [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.011 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <uuid>b43e79bd-550f-42f8-9aa7-980b6bca3f70</uuid>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <name>instance-00000004</name>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <memory>524288</memory>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:name>vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw</nova:name>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:02:27</nova:creationTime>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:flavor name="m1.small">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:memory>512</nova:memory>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="466cf0db-c3be-4d70-b9f3-08c056c2cad9"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <nova:port uuid="6b217cd3-164a-4fb4-8eb6-f1eb3c806963">
Dec  3 02:02:29 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="192.168.0.85" ipVersion="4"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <system>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="serial">b43e79bd-550f-42f8-9aa7-980b6bca3f70</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="uuid">b43e79bd-550f-42f8-9aa7-980b6bca3f70</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </system>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <os>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </os>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <features>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </features>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </source>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.eph0">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </source>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <target dev="vdb" bus="virtio"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </source>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:02:29 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:da:35:ef"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <target dev="tap6b217cd3-16"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/console.log" append="off"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <video>
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </video>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:02:29 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:02:29 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:02:29 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:02:29 compute-0 nova_compute[351485]: </domain>
Dec  3 02:02:29 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Preparing to wait for external event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.012 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.013 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.014 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:02:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  3 02:02:29 compute-0 nova_compute[351485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.014 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.015 351492 DEBUG nova.network.os_vif_util [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.016 351492 DEBUG os_vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.018 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.018 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.025 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b217cd3-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.026 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b217cd3-16, col_values=(('external_ids', {'iface-id': '6b217cd3-164a-4fb4-8eb6-f1eb3c806963', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:35:ef', 'vm-uuid': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.029 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:29 compute-0 NetworkManager[48912]: <info>  [1764727349.0310] manager: (tap6b217cd3-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.037 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.040 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.041 351492 INFO os_vif [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16')#033[00m
Dec  3 02:02:29 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.107 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.108 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.108 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.109 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No VIF found with MAC fa:16:3e:da:35:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.110 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Using config drive#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.161 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:29 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:02:28.991 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:02:29 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:02:29.014 351492 DEBUG nova.virt.libvirt.vif [None req-72496262-9b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.687 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Creating config drive at /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.700 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkq1yyx9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:29 compute-0 podman[158098]: time="2025-12-03T02:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:02:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:02:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.852 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkq1yyx9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.906 351492 DEBUG nova.storage.rbd_utils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:02:29 compute-0 nova_compute[351485]: 2025-12-03 02:02:29.917 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.202 351492 DEBUG oslo_concurrency.processutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config b43e79bd-550f-42f8-9aa7-980b6bca3f70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.203 351492 INFO nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deleting local config drive /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.config because it was imported into RBD.#033[00m
Dec  3 02:02:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 02:02:30 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.3795] manager: (tap6b217cd3-16): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  3 02:02:30 compute-0 kernel: tap6b217cd3-16: entered promiscuous mode
Dec  3 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00045|binding|INFO|Claiming lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for this chassis.
Dec  3 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00046|binding|INFO|6b217cd3-164a-4fb4-8eb6-f1eb3c806963: Claiming fa:16:3e:da:35:ef 192.168.0.85
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.401 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:35:ef 192.168.0.85'], port_security=['fa:16:3e:da:35:ef 192.168.0.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:cidrs': '192.168.0.85/24', 'neutron:device_id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=6b217cd3-164a-4fb4-8eb6-f1eb3c806963) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.403 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d bound to our chassis#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.407 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00047|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 ovn-installed in OVS
Dec  3 02:02:30 compute-0 ovn_controller[89134]: 2025-12-03T02:02:30Z|00048|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 up in Southbound
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.427 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.432 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[17a446c4-7cbf-43dc-ad09-759dcf706412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 systemd-machined[138558]: New machine qemu-4-instance-00000004.
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.444 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:30 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  3 02:02:30 compute-0 systemd-udevd[426170]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.4858] device (tap6b217cd3-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.488 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5418c8ca-35f0-40a5-a1b1-96b88ad70408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 234 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Dec  3 02:02:30 compute-0 NetworkManager[48912]: <info>  [1764727350.4923] device (tap6b217cd3-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.494 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1b9091-e168-4c9d-8085-61d2ccc5a306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.521 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated VIF entry in instance network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.522 351492 DEBUG nova.network.neutron [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.529 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[be9d5699-b997-4985-a8b1-2a8ad9fcbc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.549 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[667255ae-aae1-46fc-a3df-e8eae7f3bba6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 21284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426178, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.552 351492 DEBUG oslo_concurrency.lockutils [req-fbb8825c-b083-4f6d-882e-9a9d689e7d54 req-73939166-4375-449d-b29d-f7869a003902 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.576 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6a647c0e-4e5d-4089-9814-7698902731a9]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426181, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426181, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.578 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.580 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.583 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.583 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:02:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:30.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.835 351492 DEBUG nova.compute.manager [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.836 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG oslo_concurrency.lockutils [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:30 compute-0 nova_compute[351485]: 2025-12-03 02:02:30.837 351492 DEBUG nova.compute.manager [req-84f057b5-82ff-4441-b534-98c617d1d47d req-3651b5f8-afe9-4b9c-b94c-90aafaea6de5 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Processing event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.307 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.3064172, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.307 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Started (Lifecycle Event)#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.310 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.337 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.342 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.350 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.356 351492 INFO nova.virt.libvirt.driver [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance spawned successfully.#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.357 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.384 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.385 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.306694, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.385 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.396 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.397 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.398 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.399 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.399 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.400 351492 DEBUG nova.virt.libvirt.driver [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.408 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.413 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727351.312392, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.413 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:02:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.463 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.468 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.485 351492 INFO nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.486 351492 DEBUG nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.552 351492 INFO nova.compute.manager [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 8.81 seconds to build instance.#033[00m
Dec  3 02:02:31 compute-0 nova_compute[351485]: 2025-12-03 02:02:31.570 351492 DEBUG oslo_concurrency.lockutils [None req-72496262-9b55-4d5f-8774-46afaccd239e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:31 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 02:02:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 02:02:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.120 351492 DEBUG nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.120 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG oslo_concurrency.lockutils [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 DEBUG nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:02:33 compute-0 nova_compute[351485]: 2025-12-03 02:02:33.121 351492 WARNING nova.compute.manager [req-658c3eec-b651-4373-a9ed-9f36abf00229 req-d2054e42-7844-4566-b956-dff534174c20 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received unexpected event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with vm_state active and task_state None.#033[00m
Dec  3 02:02:34 compute-0 nova_compute[351485]: 2025-12-03 02:02:34.031 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec  3 02:02:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 96 op/s
Dec  3 02:02:38 compute-0 nova_compute[351485]: 2025-12-03 02:02:38.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 320 KiB/s wr, 65 op/s
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019266712655466563 of space, bias 1.0, pg target 0.5780013796639969 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:02:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:02:39 compute-0 nova_compute[351485]: 2025-12-03 02:02:39.036 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 320 KiB/s wr, 66 op/s
Dec  3 02:02:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 65 op/s
Dec  3 02:02:43 compute-0 nova_compute[351485]: 2025-12-03 02:02:43.022 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:44 compute-0 nova_compute[351485]: 2025-12-03 02:02:44.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 KiB/s wr, 55 op/s
Dec  3 02:02:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 KiB/s wr, 46 op/s
Dec  3 02:02:46 compute-0 podman[426262]: 2025-12-03 02:02:46.849259829 +0000 UTC m=+0.095291065 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:02:46 compute-0 podman[426264]: 2025-12-03 02:02:46.865591749 +0000 UTC m=+0.109775773 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:02:46 compute-0 podman[426263]: 2025-12-03 02:02:46.884233684 +0000 UTC m=+0.129139418 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, 
org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec  3 02:02:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:02:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:02:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:02:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4097071292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:02:48 compute-0 nova_compute[351485]: 2025-12-03 02:02:48.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec  3 02:02:49 compute-0 nova_compute[351485]: 2025-12-03 02:02:49.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.1 KiB/s wr, 1 op/s
Dec  3 02:02:51 compute-0 podman[426321]: 2025-12-03 02:02:51.877996022 +0000 UTC m=+0.128991074 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:02:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:02:53 compute-0 nova_compute[351485]: 2025-12-03 02:02:53.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:54 compute-0 nova_compute[351485]: 2025-12-03 02:02:54.049 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:54 compute-0 podman[426340]: 2025-12-03 02:02:54.89481751 +0000 UTC m=+0.139816079 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:02:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:58 compute-0 nova_compute[351485]: 2025-12-03 02:02:58.028 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:02:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:02:58 compute-0 podman[426363]: 2025-12-03 02:02:58.574424695 +0000 UTC m=+0.108426885 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:02:58 compute-0 podman[426361]: 2025-12-03 02:02:58.576066681 +0000 UTC m=+0.122265165 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  3 02:02:58 compute-0 podman[426362]: 2025-12-03 02:02:58.581053481 +0000 UTC m=+0.103955399 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:02:58 compute-0 podman[426360]: 2025-12-03 02:02:58.640763993 +0000 UTC m=+0.175390641 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:02:59 compute-0 nova_compute[351485]: 2025-12-03 02:02:59.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.631 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:02:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:02:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:02:59 compute-0 podman[158098]: time="2025-12-03T02:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:02:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:02:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec  3 02:03:00 compute-0 ovn_controller[89134]: 2025-12-03T02:03:00Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  3 02:03:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:03:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.383 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.422 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.422 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 52862152-12c7-4236-89c3-67750ecbed7a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.423 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.424 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.425 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.426 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.427 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.428 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.429 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.430 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.430 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.431 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.541 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.546 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.566 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.598 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:02 compute-0 nova_compute[351485]: 2025-12-03 02:03:02.625 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:03 compute-0 nova_compute[351485]: 2025-12-03 02:03:03.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.863774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383863849, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1617, "num_deletes": 506, "total_data_size": 2145094, "memory_usage": 2186960, "flush_reason": "Manual Compaction"}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383883213, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2102342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28516, "largest_seqno": 30132, "table_properties": {"data_size": 2095204, "index_size": 3699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17567, "raw_average_key_size": 18, "raw_value_size": 2078982, "raw_average_value_size": 2233, "num_data_blocks": 167, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727243, "oldest_key_time": 1764727243, "file_creation_time": 1764727383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19556 microseconds, and 10317 cpu microseconds.
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.883322) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2102342 bytes OK
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.883351) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886283) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886305) EVENT_LOG_v1 {"time_micros": 1764727383886298, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.886327) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2136989, prev total WAL file size 2136989, number of live WAL files 2.
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.888367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2053KB)], [65(6988KB)]
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383888525, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9258680, "oldest_snapshot_seqno": -1}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4938 keys, 7433883 bytes, temperature: kUnknown
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383937677, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7433883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7401610, "index_size": 18851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124958, "raw_average_key_size": 25, "raw_value_size": 7312881, "raw_average_value_size": 1480, "num_data_blocks": 776, "num_entries": 4938, "num_filter_entries": 4938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.937905) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7433883 bytes
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.940004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.4 rd, 151.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 5963, records dropped: 1025 output_compression: NoCompression
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.940023) EVENT_LOG_v1 {"time_micros": 1764727383940014, "job": 36, "event": "compaction_finished", "compaction_time_micros": 49141, "compaction_time_cpu_micros": 25216, "output_level": 6, "num_output_files": 1, "total_output_size": 7433883, "num_input_records": 5963, "num_output_records": 4938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383940801, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727383942306, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.887827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:03:03.942693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.055 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.480 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.481 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:03:04 compute-0 nova_compute[351485]: 2025-12-03 02:03:04.481 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:03:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.486 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.507 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.509 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.510 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.511 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.541 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.542 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.544 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.545 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:03:06 compute-0 nova_compute[351485]: 2025-12-03 02:03:06.546 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:03:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:03:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148213523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.064 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.201 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.203 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.204 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.214 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.214 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.215 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.223 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.224 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.224 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.232 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.233 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.233 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.848 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3266MB free_disk=59.8726921081543GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.938 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.939 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.939 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.940 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.940 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:03:07 compute-0 nova_compute[351485]: 2025-12-03 02:03:07.941 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.035 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.047 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:03:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:03:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386382906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.638 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.661 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.690 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:03:08 compute-0 nova_compute[351485]: 2025-12-03 02:03:08.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:09 compute-0 nova_compute[351485]: 2025-12-03 02:03:09.059 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:09 compute-0 ovn_controller[89134]: 2025-12-03T02:03:09Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:35:ef 192.168.0.85
Dec  3 02:03:09 compute-0 ovn_controller[89134]: 2025-12-03T02:03:09Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:35:ef 192.168.0.85
Dec  3 02:03:10 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 02:03:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 248 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 1020 KiB/s wr, 23 op/s
Dec  3 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.756 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.756 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:10 compute-0 nova_compute[351485]: 2025-12-03 02:03:10.758 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:11 compute-0 nova_compute[351485]: 2025-12-03 02:03:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 252 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Dec  3 02:03:13 compute-0 nova_compute[351485]: 2025-12-03 02:03:13.040 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 716eceed-7ee6-4723-8770-a9e23b3452b4 does not exist
Dec  3 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2aee36d6-a93e-4583-89be-7346368a1835 does not exist
Dec  3 02:03:13 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 09dd795c-72fa-439c-b0a4-2fffb84c971b does not exist
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:03:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 260 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Dec  3 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:03:14 compute-0 nova_compute[351485]: 2025-12-03 02:03:14.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.759430636 +0000 UTC m=+0.122608229 container create aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.720607761 +0000 UTC m=+0.083785384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:14 compute-0 systemd[1]: Started libpod-conmon-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope.
Dec  3 02:03:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.878277337 +0000 UTC m=+0.241454930 container init aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.890908323 +0000 UTC m=+0.254085946 container start aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.897914661 +0000 UTC m=+0.261092254 container attach aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:03:14 compute-0 hardcore_germain[426772]: 167 167
Dec  3 02:03:14 compute-0 systemd[1]: libpod-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope: Deactivated successfully.
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.905520365 +0000 UTC m=+0.268697958 container died aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5124f10ae54020370f8d118fd6da6e6cbc868aa0ce695d883904873803b480-merged.mount: Deactivated successfully.
Dec  3 02:03:14 compute-0 podman[426758]: 2025-12-03 02:03:14.971047453 +0000 UTC m=+0.334225056 container remove aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:03:14 compute-0 systemd[1]: libpod-conmon-aa5bd88e644b44b46059ea86a30fe7a73db0c67796666a0f1a053df00cf0d0bf.scope: Deactivated successfully.
Dec  3 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.240960504 +0000 UTC m=+0.078892295 container create d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:03:15 compute-0 systemd[1]: Started libpod-conmon-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope.
Dec  3 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.218957064 +0000 UTC m=+0.056888895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.389670918 +0000 UTC m=+0.227602729 container init d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.4078186 +0000 UTC m=+0.245750391 container start d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:03:15 compute-0 podman[426797]: 2025-12-03 02:03:15.412387368 +0000 UTC m=+0.250319159 container attach d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:03:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 02:03:16 compute-0 frosty_kare[426813]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:03:16 compute-0 frosty_kare[426813]: --> relative data size: 1.0
Dec  3 02:03:16 compute-0 frosty_kare[426813]: --> All data devices are unavailable
Dec  3 02:03:16 compute-0 systemd[1]: libpod-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Deactivated successfully.
Dec  3 02:03:16 compute-0 systemd[1]: libpod-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Consumed 1.394s CPU time.
Dec  3 02:03:17 compute-0 podman[426843]: 2025-12-03 02:03:17.036342412 +0000 UTC m=+0.081870179 container died d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f06be9773404e3220616ef845a31e21a054c71a324a3ac34d9542e8f9eb134-merged.mount: Deactivated successfully.
Dec  3 02:03:17 compute-0 podman[426842]: 2025-12-03 02:03:17.113518738 +0000 UTC m=+0.145226806 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:03:17 compute-0 podman[426845]: 2025-12-03 02:03:17.114032213 +0000 UTC m=+0.145168705 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  3 02:03:17 compute-0 podman[426846]: 2025-12-03 02:03:17.121945986 +0000 UTC m=+0.133692401 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:03:17 compute-0 podman[426843]: 2025-12-03 02:03:17.136168827 +0000 UTC m=+0.181696524 container remove d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:03:17 compute-0 systemd[1]: libpod-conmon-d86ec00e9cb082b51c5b7df6a5a8dfe7c70e93626e7fd92fac91ead295b0314f.scope: Deactivated successfully.
Dec  3 02:03:18 compute-0 nova_compute[351485]: 2025-12-03 02:03:18.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.322005555 +0000 UTC m=+0.082765135 container create 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.285910777 +0000 UTC m=+0.046670407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:18 compute-0 systemd[1]: Started libpod-conmon-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope.
Dec  3 02:03:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.491413903 +0000 UTC m=+0.252173543 container init 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.509213534 +0000 UTC m=+0.269973114 container start 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.515856352 +0000 UTC m=+0.276615972 container attach 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Dec  3 02:03:18 compute-0 heuristic_varahamihira[427064]: 167 167
Dec  3 02:03:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 02:03:18 compute-0 systemd[1]: libpod-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope: Deactivated successfully.
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.522235941 +0000 UTC m=+0.282995531 container died 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fea1bc0c872037e6b536d39640be278c45bc8af5e3fd67d1fee09933ee627f6-merged.mount: Deactivated successfully.
Dec  3 02:03:18 compute-0 podman[427049]: 2025-12-03 02:03:18.599627184 +0000 UTC m=+0.360386774 container remove 77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:03:18 compute-0 systemd[1]: libpod-conmon-77a804ff02e9104cce832fcc913332cb0b135c1bf57bbb6998dad722cfd534d0.scope: Deactivated successfully.
Dec  3 02:03:18 compute-0 podman[427087]: 2025-12-03 02:03:18.908291278 +0000 UTC m=+0.102770929 container create 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:03:18 compute-0 podman[427087]: 2025-12-03 02:03:18.878874668 +0000 UTC m=+0.073354319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:18 compute-0 systemd[1]: Started libpod-conmon-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope.
Dec  3 02:03:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.063662219 +0000 UTC m=+0.258141860 container init 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:03:19 compute-0 nova_compute[351485]: 2025-12-03 02:03:19.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.084819035 +0000 UTC m=+0.279298656 container start 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.090410333 +0000 UTC m=+0.284889994 container attach 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.506 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.507 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.507 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.520 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.531 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:03:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:19.533 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:03:19 compute-0 admiring_mayer[427103]: {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    "0": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "devices": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "/dev/loop3"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            ],
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_name": "ceph_lv0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_size": "21470642176",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "name": "ceph_lv0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "tags": {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_name": "ceph",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.crush_device_class": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.encrypted": "0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_id": "0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.vdo": "0"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            },
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "vg_name": "ceph_vg0"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        }
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    ],
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    "1": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "devices": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "/dev/loop4"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            ],
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_name": "ceph_lv1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_size": "21470642176",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "name": "ceph_lv1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "tags": {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_name": "ceph",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.crush_device_class": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.encrypted": "0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_id": "1",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.vdo": "0"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            },
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "vg_name": "ceph_vg1"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        }
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    ],
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    "2": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "devices": [
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "/dev/loop5"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            ],
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_name": "ceph_lv2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_size": "21470642176",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "name": "ceph_lv2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "tags": {
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.cluster_name": "ceph",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.crush_device_class": "",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.encrypted": "0",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osd_id": "2",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:                "ceph.vdo": "0"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            },
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "type": "block",
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:            "vg_name": "ceph_vg2"
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:        }
Dec  3 02:03:19 compute-0 admiring_mayer[427103]:    ]
Dec  3 02:03:19 compute-0 admiring_mayer[427103]: }
Dec  3 02:03:19 compute-0 systemd[1]: libpod-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope: Deactivated successfully.
Dec  3 02:03:19 compute-0 podman[427087]: 2025-12-03 02:03:19.926919151 +0000 UTC m=+1.121398812 container died 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1de98bb237e139714bcbb6352e37f27242fb2e95929de7fa91ad40d360573efe-merged.mount: Deactivated successfully.
Dec  3 02:03:20 compute-0 podman[427087]: 2025-12-03 02:03:20.022841536 +0000 UTC m=+1.217321157 container remove 639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 02:03:20 compute-0 systemd[1]: libpod-conmon-639232b947389517e4ec041a572e7ef8570cd302c1d4cbd00a318138e5428d28.scope: Deactivated successfully.
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Wed, 03 Dec 2025 02:03:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-729513d0-32b1-4d68-aaef-ee1337233879 x-openstack-request-id: req-729513d0-32b1-4d68-aaef-ee1337233879 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b43e79bd-550f-42f8-9aa7-980b6bca3f70", "name": "vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {"metering.server_group": "0f6ab671-23df-4a6d-9613-02f9fb5fb294"}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "466cf0db-c3be-4d70-b9f3-08c056c2cad9", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/466cf0db-c3be-4d70-b9f3-08c056c2cad9"}]}, "flavor": {"id": "bc665ec6-3672-4e52-a447-5267b04e227a", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bc665ec6-3672-4e52-a447-5267b04e227a"}]}, "created": "2025-12-03T02:02:21Z", "updated": "2025-12-03T02:02:31Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.85", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:35:ef"}, {"version": 4, "addr": "192.168.122.232", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:35:ef"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:02:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.479 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b43e79bd-550f-42f8-9aa7-980b6bca3f70 used request id req-729513d0-32b1-4d68-aaef-ee1337233879 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.481 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.484 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:03:20.485302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.520 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.549 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.589 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.73046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.621 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:03:20.622685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.628 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.632 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.637 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b43e79bd-550f-42f8-9aa7-980b6bca3f70 / tap6b217cd3-16 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.637 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.643 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:03:20.644288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.645 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.645 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:03:20.647343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.648 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:03:20.650013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.650 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.651 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.652 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:03:20.653961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.654 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.655 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:03:20.656776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.683 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.684 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.684 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.710 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.711 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.711 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.741 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.769 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.769 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.770 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.770 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:03:20.771404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.771 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>]
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.772 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:03:20.772896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.843 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.844 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.844 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.925 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.926 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.927 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.987 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.988 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:20.988 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.038 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.040 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:03:21.039919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:03:21.042150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.042 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.043 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.044 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.046 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:03:21.046295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.047 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.048 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.049 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.049 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.051 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:03:21.051794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.052 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.053 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:03:21.053801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.054 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.055 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.056 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:03:21.058037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.059 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41689088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.060 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6998528252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.062 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:03:21.062432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.063 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 7883313820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.064 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.065 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.066 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:03:21.066740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.067 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.068 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.069 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:03:21.071324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.072 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 345190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 36000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 36880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 40530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 7568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.075 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.077 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.078 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:03:21.072964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:03:21.074375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:03:21.075449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:03:21.076999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:03:21.080600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:03:21.082071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:03:21.083207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.083 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.084 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw>]
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:03:21.084733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:03:21.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.091565681 +0000 UTC m=+0.083175905 container create d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.049887326 +0000 UTC m=+0.041497590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:21 compute-0 systemd[1]: Started libpod-conmon-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope.
Dec  3 02:03:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.21352406 +0000 UTC m=+0.205193796 container init d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.229268084 +0000 UTC m=+0.220878308 container start d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.234051429 +0000 UTC m=+0.225661693 container attach d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:03:21 compute-0 nifty_shamir[427275]: 167 167
Dec  3 02:03:21 compute-0 systemd[1]: libpod-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope: Deactivated successfully.
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.242444866 +0000 UTC m=+0.234055120 container died d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8985b32740d7e9dd1a24e5e59f1d7d227071ede19b7748ec0c9876364e037e15-merged.mount: Deactivated successfully.
Dec  3 02:03:21 compute-0 podman[427260]: 2025-12-03 02:03:21.325388124 +0000 UTC m=+0.316998388 container remove d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:03:21 compute-0 systemd[1]: libpod-conmon-d90743f8fb1ffb8677621860876f257df1639f810b2d0ed7ae92498247321f92.scope: Deactivated successfully.
Dec  3 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.637155966 +0000 UTC m=+0.100191257 container create 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.602111487 +0000 UTC m=+0.065146838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:03:21 compute-0 systemd[1]: Started libpod-conmon-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope.
Dec  3 02:03:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.823180721 +0000 UTC m=+0.286216062 container init 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.848346351 +0000 UTC m=+0.311381642 container start 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 02:03:21 compute-0 podman[427298]: 2025-12-03 02:03:21.855899254 +0000 UTC m=+0.318934525 container attach 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:03:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 499 KiB/s wr, 34 op/s
Dec  3 02:03:22 compute-0 podman[427332]: 2025-12-03 02:03:22.842527735 +0000 UTC m=+0.090670227 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:03:22 compute-0 friendly_bell[427315]: {
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_id": 2,
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "type": "bluestore"
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    },
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_id": 1,
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "type": "bluestore"
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    },
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_id": 0,
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:03:22 compute-0 friendly_bell[427315]:        "type": "bluestore"
Dec  3 02:03:22 compute-0 friendly_bell[427315]:    }
Dec  3 02:03:22 compute-0 friendly_bell[427315]: }
Dec  3 02:03:23 compute-0 systemd[1]: libpod-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Deactivated successfully.
Dec  3 02:03:23 compute-0 podman[427298]: 2025-12-03 02:03:23.021260415 +0000 UTC m=+1.484295686 container died 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:03:23 compute-0 systemd[1]: libpod-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Consumed 1.157s CPU time.
Dec  3 02:03:23 compute-0 nova_compute[351485]: 2025-12-03 02:03:23.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b282e5a0651e0577e0d8c47e4860cd1ac4868632574da25fec5820bc48e6cf7-merged.mount: Deactivated successfully.
Dec  3 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:23 compute-0 podman[427298]: 2025-12-03 02:03:23.101170899 +0000 UTC m=+1.564206170 container remove 7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:03:23 compute-0 systemd[1]: libpod-conmon-7ba0bfae0c73c5df5ec170c907f7bd021e88e80367a77c6070e0ea1ca7cf992f.scope: Deactivated successfully.
Dec  3 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:03:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:03:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 185de5cd-be34-414f-8d39-a2662890bc86 does not exist
Dec  3 02:03:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fe7db48c-44f3-4bf4-a70d-adc40731b2de does not exist
Dec  3 02:03:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:03:24 compute-0 nova_compute[351485]: 2025-12-03 02:03:24.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 72 KiB/s wr, 13 op/s
Dec  3 02:03:25 compute-0 podman[427433]: 2025-12-03 02:03:25.909066486 +0000 UTC m=+0.155264630 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, name=ubi9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:03:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 58 KiB/s wr, 11 op/s
Dec  3 02:03:28 compute-0 nova_compute[351485]: 2025-12-03 02:03:28.047 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:03:28
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'default.rgw.meta', 'backups']
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  3 02:03:28 compute-0 podman[427454]: 2025-12-03 02:03:28.886141273 +0000 UTC m=+0.119361967 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec  3 02:03:28 compute-0 podman[427456]: 2025-12-03 02:03:28.893230693 +0000 UTC m=+0.108296355 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec  3 02:03:28 compute-0 podman[427455]: 2025-12-03 02:03:28.904721977 +0000 UTC m=+0.130242434 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:03:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:03:28 compute-0 podman[427453]: 2025-12-03 02:03:28.96227081 +0000 UTC m=+0.202009048 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 02:03:29 compute-0 nova_compute[351485]: 2025-12-03 02:03:29.085 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:29 compute-0 podman[158098]: time="2025-12-03T02:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:03:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:03:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Dec  3 02:03:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:03:31 compute-0 openstack_network_exporter[368278]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:03:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:33 compute-0 nova_compute[351485]: 2025-12-03 02:03:33.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:34 compute-0 nova_compute[351485]: 2025-12-03 02:03:34.089 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:38 compute-0 nova_compute[351485]: 2025-12-03 02:03:38.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:03:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:39 compute-0 nova_compute[351485]: 2025-12-03 02:03:39.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:43 compute-0 nova_compute[351485]: 2025-12-03 02:03:43.057 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:44 compute-0 nova_compute[351485]: 2025-12-03 02:03:44.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:03:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:03:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:03:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377572860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:03:47 compute-0 podman[427540]: 2025-12-03 02:03:47.870211137 +0000 UTC m=+0.108840060 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:03:47 compute-0 podman[427542]: 2025-12-03 02:03:47.907851598 +0000 UTC m=+0.140549934 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:03:47 compute-0 podman[427541]: 2025-12-03 02:03:47.907948231 +0000 UTC m=+0.136855750 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  3 02:03:48 compute-0 nova_compute[351485]: 2025-12-03 02:03:48.059 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:49 compute-0 nova_compute[351485]: 2025-12-03 02:03:49.103 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:53 compute-0 nova_compute[351485]: 2025-12-03 02:03:53.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:53 compute-0 podman[427598]: 2025-12-03 02:03:53.905983966 +0000 UTC m=+0.155311191 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 02:03:54 compute-0 nova_compute[351485]: 2025-12-03 02:03:54.108 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:03:56 compute-0 podman[427618]: 2025-12-03 02:03:56.874803141 +0000 UTC m=+0.129740848 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, version=9.4, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc.)
Dec  3 02:03:58 compute-0 nova_compute[351485]: 2025-12-03 02:03:58.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:03:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:03:59 compute-0 nova_compute[351485]: 2025-12-03 02:03:59.110 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.632 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.633 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:03:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:03:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:03:59 compute-0 podman[158098]: time="2025-12-03T02:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:03:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:03:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Dec  3 02:03:59 compute-0 podman[427638]: 2025-12-03 02:03:59.88197216 +0000 UTC m=+0.101022770 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:03:59 compute-0 podman[427637]: 2025-12-03 02:03:59.888852554 +0000 UTC m=+0.123807173 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Dec  3 02:03:59 compute-0 podman[427639]: 2025-12-03 02:03:59.890422778 +0000 UTC m=+0.098381485 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  3 02:03:59 compute-0 podman[427636]: 2025-12-03 02:03:59.926774613 +0000 UTC m=+0.156098723 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:04:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:04:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:04:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:02 compute-0 nova_compute[351485]: 2025-12-03 02:04:02.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:03 compute-0 nova_compute[351485]: 2025-12-03 02:04:03.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.114 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.634 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.634 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.635 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:04:04 compute-0 nova_compute[351485]: 2025-12-03 02:04:04.636 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:04:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:04:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506268759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.184 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.331 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.332 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.333 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.338 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.338 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.339 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.343 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.343 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.344 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.352 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.875 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.876 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3198MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.877 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:04:05 compute-0 nova_compute[351485]: 2025-12-03 02:04:05.877 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.037 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.038 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.038 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.039 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.157 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:04:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:04:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999662885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.672 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.686 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.712 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.715 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:04:06 compute-0 nova_compute[351485]: 2025-12-03 02:04:06.715 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.717 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.746 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.747 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:04:07 compute-0 nova_compute[351485]: 2025-12-03 02:04:07.748 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.562 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.564 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.565 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:04:08 compute-0 nova_compute[351485]: 2025-12-03 02:04:08.565 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:04:09 compute-0 nova_compute[351485]: 2025-12-03 02:04:09.117 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.892 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.910 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.912 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.912 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:10 compute-0 nova_compute[351485]: 2025-12-03 02:04:10.913 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:13 compute-0 nova_compute[351485]: 2025-12-03 02:04:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:14 compute-0 nova_compute[351485]: 2025-12-03 02:04:14.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:16 compute-0 nova_compute[351485]: 2025-12-03 02:04:16.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:04:16 compute-0 nova_compute[351485]: 2025-12-03 02:04:16.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:04:18 compute-0 nova_compute[351485]: 2025-12-03 02:04:18.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:18 compute-0 podman[427771]: 2025-12-03 02:04:18.877000513 +0000 UTC m=+0.110157757 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:04:18 compute-0 podman[427769]: 2025-12-03 02:04:18.881445038 +0000 UTC m=+0.123769211 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  3 02:04:18 compute-0 podman[427770]: 2025-12-03 02:04:18.884736871 +0000 UTC m=+0.123704799 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  3 02:04:19 compute-0 nova_compute[351485]: 2025-12-03 02:04:19.125 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:23 compute-0 nova_compute[351485]: 2025-12-03 02:04:23.084 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:24 compute-0 nova_compute[351485]: 2025-12-03 02:04:24.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:24 compute-0 podman[427927]: 2025-12-03 02:04:24.176088018 +0000 UTC m=+0.135380388 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 02:04:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a9510035-7ff0-4fa4-baea-74d56f945009 does not exist
Dec  3 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 87534f33-8117-4ae1-bd1b-32577a63adb6 does not exist
Dec  3 02:04:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a55b09f-b7e0-4e4d-b039-1cf50cf4c2ea does not exist
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:04:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:04:25 compute-0 podman[428115]: 2025-12-03 02:04:25.878267437 +0000 UTC m=+0.097104029 container create 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 02:04:25 compute-0 podman[428115]: 2025-12-03 02:04:25.848159928 +0000 UTC m=+0.066996600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:25 compute-0 systemd[1]: Started libpod-conmon-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope.
Dec  3 02:04:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.048791825 +0000 UTC m=+0.267628447 container init 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.060824415 +0000 UTC m=+0.279661007 container start 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.06599626 +0000 UTC m=+0.284832922 container attach 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:04:26 compute-0 silly_varahamihira[428130]: 167 167
Dec  3 02:04:26 compute-0 systemd[1]: libpod-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope: Deactivated successfully.
Dec  3 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.077243848 +0000 UTC m=+0.296080460 container died 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bdb7c5b106f6656867ef77cc8872a74927db72e470a8638e82484a5c2be15f5-merged.mount: Deactivated successfully.
Dec  3 02:04:26 compute-0 podman[428115]: 2025-12-03 02:04:26.158445997 +0000 UTC m=+0.377282599 container remove 81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:04:26 compute-0 systemd[1]: libpod-conmon-81338d2e78de5ee267350f5a847c38fc6cb63197b04d9ccda2ce54e6ebaac703.scope: Deactivated successfully.
Dec  3 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.415698812 +0000 UTC m=+0.075836230 container create f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.382372282 +0000 UTC m=+0.042509780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:26 compute-0 systemd[1]: Started libpod-conmon-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope.
Dec  3 02:04:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  3 02:04:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.62167506 +0000 UTC m=+0.281812568 container init f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.64436283 +0000 UTC m=+0.304500268 container start f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:04:26 compute-0 podman[428153]: 2025-12-03 02:04:26.651486211 +0000 UTC m=+0.311623669 container attach f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:04:27 compute-0 podman[428190]: 2025-12-03 02:04:27.882119943 +0000 UTC m=+0.126638472 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Dec  3 02:04:27 compute-0 wizardly_mayer[428169]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:04:27 compute-0 wizardly_mayer[428169]: --> relative data size: 1.0
Dec  3 02:04:27 compute-0 wizardly_mayer[428169]: --> All data devices are unavailable
Dec  3 02:04:27 compute-0 systemd[1]: libpod-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Deactivated successfully.
Dec  3 02:04:27 compute-0 podman[428153]: 2025-12-03 02:04:27.997100445 +0000 UTC m=+1.657237853 container died f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:04:27 compute-0 systemd[1]: libpod-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Consumed 1.276s CPU time.
Dec  3 02:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fedce6caf64dc8dd8011f05f367f50aa409c5fcce0f0e8d9611a4b70f2f9597f-merged.mount: Deactivated successfully.
Dec  3 02:04:28 compute-0 nova_compute[351485]: 2025-12-03 02:04:28.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:28 compute-0 podman[428153]: 2025-12-03 02:04:28.099178734 +0000 UTC m=+1.759316152 container remove f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:04:28 compute-0 systemd[1]: libpod-conmon-f2d2c3d33dab846336443c45e57314047b57bd4ec55c5ed4d578c5e3ec4174a2.scope: Deactivated successfully.
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:04:28
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'vms', '.mgr']
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:04:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:04:29 compute-0 nova_compute[351485]: 2025-12-03 02:04:29.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.271057548 +0000 UTC m=+0.093182478 container create a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.235889957 +0000 UTC m=+0.058014937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:29 compute-0 systemd[1]: Started libpod-conmon-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope.
Dec  3 02:04:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.427300694 +0000 UTC m=+0.249425664 container init a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.444623403 +0000 UTC m=+0.266748293 container start a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.449819269 +0000 UTC m=+0.271944159 container attach a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:04:29 compute-0 great_mcclintock[428385]: 167 167
Dec  3 02:04:29 compute-0 systemd[1]: libpod-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope: Deactivated successfully.
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.459380419 +0000 UTC m=+0.281505339 container died a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9142a6e55d6191f8e89ce0678f6b4f8cda0591df01da98e1e828e300d9c4098-merged.mount: Deactivated successfully.
Dec  3 02:04:29 compute-0 podman[428370]: 2025-12-03 02:04:29.535184396 +0000 UTC m=+0.357309286 container remove a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:04:29 compute-0 systemd[1]: libpod-conmon-a65905a91419b8ce7f5225daf1c88c6a78a8a3fd489dddb26af7ebfeafb15dc8.scope: Deactivated successfully.
Dec  3 02:04:29 compute-0 podman[158098]: time="2025-12-03T02:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:04:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:04:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  3 02:04:29 compute-0 podman[428411]: 2025-12-03 02:04:29.840482425 +0000 UTC m=+0.082003854 container create e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:04:29 compute-0 podman[428411]: 2025-12-03 02:04:29.809874252 +0000 UTC m=+0.051395751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:29 compute-0 systemd[1]: Started libpod-conmon-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope.
Dec  3 02:04:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.00593509 +0000 UTC m=+0.247456539 container init e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.034491896 +0000 UTC m=+0.276013335 container start e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.043304614 +0000 UTC m=+0.284826043 container attach e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:04:30 compute-0 podman[428431]: 2025-12-03 02:04:30.095786574 +0000 UTC m=+0.113460710 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 02:04:30 compute-0 podman[428428]: 2025-12-03 02:04:30.094973731 +0000 UTC m=+0.123070131 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1755695350, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:04:30 compute-0 podman[428430]: 2025-12-03 02:04:30.096110233 +0000 UTC m=+0.123896124 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:04:30 compute-0 podman[428433]: 2025-12-03 02:04:30.133693293 +0000 UTC m=+0.154887239 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 02:04:30 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 02:04:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:30 compute-0 fervent_jones[428427]: {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    "0": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "devices": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "/dev/loop3"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            ],
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_name": "ceph_lv0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_size": "21470642176",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "name": "ceph_lv0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "tags": {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_name": "ceph",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.crush_device_class": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.encrypted": "0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_id": "0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.vdo": "0"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            },
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "vg_name": "ceph_vg0"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        }
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    ],
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    "1": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "devices": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "/dev/loop4"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            ],
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_name": "ceph_lv1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_size": "21470642176",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "name": "ceph_lv1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "tags": {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_name": "ceph",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.crush_device_class": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.encrypted": "0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_id": "1",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.vdo": "0"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            },
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "vg_name": "ceph_vg1"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        }
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    ],
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    "2": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "devices": [
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "/dev/loop5"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            ],
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_name": "ceph_lv2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_size": "21470642176",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "name": "ceph_lv2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "tags": {
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.cluster_name": "ceph",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.crush_device_class": "",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.encrypted": "0",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osd_id": "2",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:                "ceph.vdo": "0"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            },
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "type": "block",
Dec  3 02:04:30 compute-0 fervent_jones[428427]:            "vg_name": "ceph_vg2"
Dec  3 02:04:30 compute-0 fervent_jones[428427]:        }
Dec  3 02:04:30 compute-0 fervent_jones[428427]:    ]
Dec  3 02:04:30 compute-0 fervent_jones[428427]: }
Dec  3 02:04:30 compute-0 systemd[1]: libpod-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope: Deactivated successfully.
Dec  3 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.865596902 +0000 UTC m=+1.107118321 container died e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 02:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fbb0901e169415853ef5295889bf59f0e356261e16754b8feadebfcb94d86b-merged.mount: Deactivated successfully.
Dec  3 02:04:30 compute-0 podman[428411]: 2025-12-03 02:04:30.955313432 +0000 UTC m=+1.196834891 container remove e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:04:30 compute-0 systemd[1]: libpod-conmon-e92fc779932c32854eff232891fa3a792843237dde5a2f50e0f1d798f5de4f24.scope: Deactivated successfully.
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:04:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.039969787 +0000 UTC m=+0.102263324 container create 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.010115856 +0000 UTC m=+0.072409393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:32 compute-0 systemd[1]: Started libpod-conmon-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope.
Dec  3 02:04:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:32 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.213216292 +0000 UTC m=+0.275509839 container init 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.225279502 +0000 UTC m=+0.287573049 container start 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.232770204 +0000 UTC m=+0.295063711 container attach 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:04:32 compute-0 angry_dirac[428688]: 167 167
Dec  3 02:04:32 compute-0 systemd[1]: libpod-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope: Deactivated successfully.
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.239703239 +0000 UTC m=+0.301996776 container died 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af5c1600c2d78876e4bbb394e7becf2d3e602c5e08a956e820adf1a4e6de85a-merged.mount: Deactivated successfully.
Dec  3 02:04:32 compute-0 podman[428672]: 2025-12-03 02:04:32.313168831 +0000 UTC m=+0.375462338 container remove 7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:04:32 compute-0 systemd[1]: libpod-conmon-7689b14e040f9380556c928b8c7953a12b9320094852b6f67b327afa75c6d5ac.scope: Deactivated successfully.
Dec  3 02:04:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.616808813 +0000 UTC m=+0.095116413 container create 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.577215777 +0000 UTC m=+0.055523427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:04:32 compute-0 systemd[1]: Started libpod-conmon-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope.
Dec  3 02:04:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.78906941 +0000 UTC m=+0.267377050 container init 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.819797726 +0000 UTC m=+0.298105326 container start 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:04:32 compute-0 podman[428712]: 2025-12-03 02:04:32.826190437 +0000 UTC m=+0.304498077 container attach 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:04:33 compute-0 nova_compute[351485]: 2025-12-03 02:04:33.089 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:33 compute-0 bold_blackburn[428728]: {
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_id": 2,
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "type": "bluestore"
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    },
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_id": 1,
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "type": "bluestore"
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    },
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_id": 0,
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:        "type": "bluestore"
Dec  3 02:04:33 compute-0 bold_blackburn[428728]:    }
Dec  3 02:04:33 compute-0 bold_blackburn[428728]: }
Dec  3 02:04:34 compute-0 systemd[1]: libpod-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Deactivated successfully.
Dec  3 02:04:34 compute-0 systemd[1]: libpod-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Consumed 1.192s CPU time.
Dec  3 02:04:34 compute-0 podman[428712]: 2025-12-03 02:04:34.0291946 +0000 UTC m=+1.507502190 container died 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a8088f1b0db2c2c0eede4ac708b79488f89951bda7e5b7f09c2c1194251e916-merged.mount: Deactivated successfully.
Dec  3 02:04:34 compute-0 podman[428712]: 2025-12-03 02:04:34.125988459 +0000 UTC m=+1.604296029 container remove 5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:04:34 compute-0 nova_compute[351485]: 2025-12-03 02:04:34.138 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:34 compute-0 systemd[1]: libpod-conmon-5b04bb3c4d7f0abe831a7d782e44c452639a6c5b3a8044a8504a30b97e53b165.scope: Deactivated successfully.
Dec  3 02:04:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:04:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:04:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 88695869-7b15-4888-bff3-f25f866866cd does not exist
Dec  3 02:04:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d0ff8397-6576-4abb-98d6-8f4ea3087834 does not exist
Dec  3 02:04:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:04:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:38 compute-0 nova_compute[351485]: 2025-12-03 02:04:38.091 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:04:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:39 compute-0 nova_compute[351485]: 2025-12-03 02:04:39.142 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:43 compute-0 nova_compute[351485]: 2025-12-03 02:04:43.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:44 compute-0 nova_compute[351485]: 2025-12-03 02:04:44.146 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:04:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:04:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:04:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13556132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:04:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:48 compute-0 nova_compute[351485]: 2025-12-03 02:04:48.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:49 compute-0 nova_compute[351485]: 2025-12-03 02:04:49.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:49 compute-0 podman[428825]: 2025-12-03 02:04:49.890420358 +0000 UTC m=+0.127233399 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:04:49 compute-0 podman[428826]: 2025-12-03 02:04:49.897348523 +0000 UTC m=+0.133579008 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  3 02:04:49 compute-0 podman[428827]: 2025-12-03 02:04:49.90080077 +0000 UTC m=+0.133680220 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:04:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:53 compute-0 nova_compute[351485]: 2025-12-03 02:04:53.101 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:54 compute-0 nova_compute[351485]: 2025-12-03 02:04:54.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:54 compute-0 podman[428884]: 2025-12-03 02:04:54.878357618 +0000 UTC m=+0.125782998 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 02:04:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:04:58 compute-0 nova_compute[351485]: 2025-12-03 02:04:58.105 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:04:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:04:58 compute-0 podman[428902]: 2025-12-03 02:04:58.88000061 +0000 UTC m=+0.133542957 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Dec  3 02:04:59 compute-0 nova_compute[351485]: 2025-12-03 02:04:59.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.633 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:04:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:04:59.634 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:04:59 compute-0 podman[158098]: time="2025-12-03T02:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:04:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:04:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 02:05:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:00 compute-0 podman[428925]: 2025-12-03 02:05:00.892015766 +0000 UTC m=+0.118084651 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:05:00 compute-0 podman[428924]: 2025-12-03 02:05:00.899755884 +0000 UTC m=+0.129359668 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  3 02:05:00 compute-0 podman[428926]: 2025-12-03 02:05:00.903727106 +0000 UTC m=+0.122570827 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 02:05:00 compute-0 podman[428923]: 2025-12-03 02:05:00.931796538 +0000 UTC m=+0.169173812 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:05:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:05:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:03 compute-0 nova_compute[351485]: 2025-12-03 02:05:03.109 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:04 compute-0 nova_compute[351485]: 2025-12-03 02:05:04.164 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:04 compute-0 nova_compute[351485]: 2025-12-03 02:05:04.580 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.659 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.660 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.661 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.662 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:05:06 compute-0 nova_compute[351485]: 2025-12-03 02:05:06.663 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:05:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:05:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976053642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.258 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.408 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.409 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.410 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.422 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.423 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.424 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.433 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.434 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.434 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.443 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.443 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:07 compute-0 nova_compute[351485]: 2025-12-03 02:05:07.444 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.041 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.043 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3191MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.044 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.044 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:05:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.112 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.170 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.171 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.187 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.208 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.209 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.227 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.259 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.397 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:05:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:05:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242192799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.917 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.930 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.948 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.951 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:05:08 compute-0 nova_compute[351485]: 2025-12-03 02:05:08.952 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.954 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:09 compute-0 nova_compute[351485]: 2025-12-03 02:05:09.955 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.503 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.504 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:05:10 compute-0 nova_compute[351485]: 2025-12-03 02:05:10.504 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:05:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.215 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.230 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.232 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.234 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.234 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:12 compute-0 nova_compute[351485]: 2025-12-03 02:05:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:13 compute-0 nova_compute[351485]: 2025-12-03 02:05:13.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:13 compute-0 nova_compute[351485]: 2025-12-03 02:05:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:14 compute-0 nova_compute[351485]: 2025-12-03 02:05:14.172 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.120 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:05:18 compute-0 nova_compute[351485]: 2025-12-03 02:05:18.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:05:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:19 compute-0 nova_compute[351485]: 2025-12-03 02:05:19.176 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.506 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.507 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.508 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.520 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '52862152-12c7-4236-89c3-67750ecbed7a', 'name': 'vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.537 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:05:19.538887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.581 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.630 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.672 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:05:19.709394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.716 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.720 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.725 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.730 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:05:19.731886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.732 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.733 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.733 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.734 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:05:19.734740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.735 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.736 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.736 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:05:19.737848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.739 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.739 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.740 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:05:19.740708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.741 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:05:19.743445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.772 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.772 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.773 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.807 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.809 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.809 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.839 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.840 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.840 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.870 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.871 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.872 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.875 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.878 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:05:19.877941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.970 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.971 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:19.971 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.049 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.050 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.051 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.154 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.155 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.156 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.253 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.254 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.254 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.256 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.260 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.260 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.262 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:05:20.259996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.263 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.264 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 1829221883 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.266 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 322583639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.267 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.latency volume: 204508972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.267 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.268 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.268 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.269 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.269 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.270 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:05:20.266089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.270 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.271 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.272 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.274 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:05:20.274514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.275 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.276 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.277 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.278 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.278 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.279 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.280 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.280 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.281 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.282 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.282 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.283 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.286 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.287 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.288 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.291 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:05:20.285959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.292 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:05:20.290828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.293 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.294 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.294 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.295 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.295 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.296 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.296 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.299 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.299 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:05:20.298797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.300 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.301 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.302 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.302 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.303 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.303 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.304 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.304 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.305 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.307 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 6998528252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.308 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 29937762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:05:20.307329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.309 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.309 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.310 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.311 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.311 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.312 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.312 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.313 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.315 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.316 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.316 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.317 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:05:20.315422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.318 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.318 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.319 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.319 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.320 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.320 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.321 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.324 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.325 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.326 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.326 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:05:20.324357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/cpu volume: 347260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.328 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 38050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.329 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 39080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.329 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 42600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:05:20.328105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes volume: 7568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:05:20.331130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:05:20.333057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.335 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.336 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.336 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:05:20.335606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.337 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.338 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.339 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:05:20.341087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:05:20.343646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.compute.pollsters [-] 52862152-12c7-4236-89c3-67750ecbed7a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:05:20.346037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.347 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.347 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:05:20.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:05:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:20 compute-0 podman[429061]: 2025-12-03 02:05:20.887053362 +0000 UTC m=+0.130425909 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:05:20 compute-0 podman[429063]: 2025-12-03 02:05:20.884180571 +0000 UTC m=+0.114403987 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:05:20 compute-0 podman[429062]: 2025-12-03 02:05:20.918295243 +0000 UTC m=+0.151669008 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm)
Dec  3 02:05:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:23 compute-0 nova_compute[351485]: 2025-12-03 02:05:23.123 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:24 compute-0 nova_compute[351485]: 2025-12-03 02:05:24.180 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:25 compute-0 podman[429119]: 2025-12-03 02:05:25.881401126 +0000 UTC m=+0.123121083 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 02:05:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:28 compute-0 nova_compute[351485]: 2025-12-03 02:05:28.127 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:05:28
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:05:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:05:29 compute-0 nova_compute[351485]: 2025-12-03 02:05:29.184 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:29 compute-0 podman[158098]: time="2025-12-03T02:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:05:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:05:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  3 02:05:29 compute-0 podman[429139]: 2025-12-03 02:05:29.925900923 +0000 UTC m=+0.158799618 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=)
Dec  3 02:05:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:05:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:05:31 compute-0 podman[429166]: 2025-12-03 02:05:31.871461255 +0000 UTC m=+0.099805425 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:05:31 compute-0 podman[429164]: 2025-12-03 02:05:31.871587049 +0000 UTC m=+0.108129780 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:05:31 compute-0 podman[429159]: 2025-12-03 02:05:31.873104801 +0000 UTC m=+0.115026794 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Dec  3 02:05:31 compute-0 podman[429158]: 2025-12-03 02:05:31.889045321 +0000 UTC m=+0.151739300 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 02:05:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:33 compute-0 nova_compute[351485]: 2025-12-03 02:05:33.130 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:34 compute-0 nova_compute[351485]: 2025-12-03 02:05:34.188 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:36 compute-0 podman[429412]: 2025-12-03 02:05:36.06047247 +0000 UTC m=+0.168012039 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:05:36 compute-0 podman[429412]: 2025-12-03 02:05:36.183395286 +0000 UTC m=+0.290934855 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:05:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:05:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:05:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:38 compute-0 nova_compute[351485]: 2025-12-03 02:05:38.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d00633c2-6072-429f-b182-134fb661ac3c does not exist
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev feb6d4c1-e25d-406d-9e71-1767d6055bab does not exist
Dec  3 02:05:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cea3d146-8bc8-4f7d-8d65-8944f8afbde6 does not exist
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:05:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:05:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:05:39 compute-0 nova_compute[351485]: 2025-12-03 02:05:39.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.059836377 +0000 UTC m=+0.085404000 container create 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.023316977 +0000 UTC m=+0.048884660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:40 compute-0 systemd[1]: Started libpod-conmon-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope.
Dec  3 02:05:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.253135958 +0000 UTC m=+0.278703601 container init 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.264729975 +0000 UTC m=+0.290297578 container start 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.269869739 +0000 UTC m=+0.295437612 container attach 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:05:40 compute-0 quirky_heisenberg[429854]: 167 167
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.275063726 +0000 UTC m=+0.300631329 container died 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:05:40 compute-0 systemd[1]: libpod-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope: Deactivated successfully.
Dec  3 02:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bce0c036d65bccc4e4226934db27d0e11eea21d0116c09639b6769c062246d-merged.mount: Deactivated successfully.
Dec  3 02:05:40 compute-0 podman[429838]: 2025-12-03 02:05:40.353123197 +0000 UTC m=+0.378690800 container remove 64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_heisenberg, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:05:40 compute-0 systemd[1]: libpod-conmon-64d280e4c6e248103f0a448138b072ae40a6b2a3dfffad2e2c832e13e74cf9b6.scope: Deactivated successfully.
Dec  3 02:05:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.664161497 +0000 UTC m=+0.098192289 container create efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.631745594 +0000 UTC m=+0.065776446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:40 compute-0 systemd[1]: Started libpod-conmon-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope.
Dec  3 02:05:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.851345685 +0000 UTC m=+0.285376497 container init efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.87741012 +0000 UTC m=+0.311440922 container start efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:05:40 compute-0 podman[429878]: 2025-12-03 02:05:40.887402192 +0000 UTC m=+0.321433044 container attach efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:05:42 compute-0 amazing_leavitt[429894]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:05:42 compute-0 amazing_leavitt[429894]: --> relative data size: 1.0
Dec  3 02:05:42 compute-0 amazing_leavitt[429894]: --> All data devices are unavailable
Dec  3 02:05:42 compute-0 systemd[1]: libpod-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Deactivated successfully.
Dec  3 02:05:42 compute-0 systemd[1]: libpod-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Consumed 1.249s CPU time.
Dec  3 02:05:42 compute-0 podman[429878]: 2025-12-03 02:05:42.19412215 +0000 UTC m=+1.628152962 container died efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26b2e56b943cdcf4d4228d59fc89d640dc126ffcdd801863089a46a570d5d4d-merged.mount: Deactivated successfully.
Dec  3 02:05:42 compute-0 podman[429878]: 2025-12-03 02:05:42.300219391 +0000 UTC m=+1.734250173 container remove efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:05:42 compute-0 systemd[1]: libpod-conmon-efa3d7ce95de1bc7319da9e35335f441a15a38319f8cb09a73f149e809e2a851.scope: Deactivated successfully.
Dec  3 02:05:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:43 compute-0 nova_compute[351485]: 2025-12-03 02:05:43.136 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.357516866 +0000 UTC m=+0.075642264 container create 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.317037755 +0000 UTC m=+0.035163203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:43 compute-0 systemd[1]: Started libpod-conmon-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope.
Dec  3 02:05:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.492246245 +0000 UTC m=+0.210371603 container init 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.501708502 +0000 UTC m=+0.219833900 container start 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.507972809 +0000 UTC m=+0.226098217 container attach 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:05:43 compute-0 cool_elbakyan[430089]: 167 167
Dec  3 02:05:43 compute-0 systemd[1]: libpod-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope: Deactivated successfully.
Dec  3 02:05:43 compute-0 conmon[430089]: conmon 825616ace6ec7d8fdcae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope/container/memory.events
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.514494242 +0000 UTC m=+0.232619610 container died 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-68193d099d149f787ae05750eed4ab3415c454c6810de4260ee183887c98a9a7-merged.mount: Deactivated successfully.
Dec  3 02:05:43 compute-0 podman[430073]: 2025-12-03 02:05:43.569040131 +0000 UTC m=+0.287165489 container remove 825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:05:43 compute-0 systemd[1]: libpod-conmon-825616ace6ec7d8fdcae6081ffe88411559fe7292b1bba88ce2af4ac88a76298.scope: Deactivated successfully.
Dec  3 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.819364679 +0000 UTC m=+0.091327086 container create de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.778629291 +0000 UTC m=+0.050591738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:43 compute-0 systemd[1]: Started libpod-conmon-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope.
Dec  3 02:05:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:43 compute-0 podman[430112]: 2025-12-03 02:05:43.973213088 +0000 UTC m=+0.245175485 container init de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.003129362 +0000 UTC m=+0.275091729 container start de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.007619538 +0000 UTC m=+0.279581915 container attach de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:05:44 compute-0 nova_compute[351485]: 2025-12-03 02:05:44.198 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]: {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    "0": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "devices": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "/dev/loop3"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            ],
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_name": "ceph_lv0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_size": "21470642176",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "name": "ceph_lv0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "tags": {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_name": "ceph",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.crush_device_class": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.encrypted": "0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_id": "0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.vdo": "0"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            },
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "vg_name": "ceph_vg0"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        }
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    ],
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    "1": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "devices": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "/dev/loop4"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            ],
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_name": "ceph_lv1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_size": "21470642176",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "name": "ceph_lv1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "tags": {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_name": "ceph",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.crush_device_class": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.encrypted": "0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_id": "1",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.vdo": "0"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            },
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "vg_name": "ceph_vg1"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        }
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    ],
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    "2": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "devices": [
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "/dev/loop5"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            ],
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_name": "ceph_lv2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_size": "21470642176",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "name": "ceph_lv2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "tags": {
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.cluster_name": "ceph",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.crush_device_class": "",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.encrypted": "0",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osd_id": "2",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:                "ceph.vdo": "0"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            },
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "type": "block",
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:            "vg_name": "ceph_vg2"
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:        }
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]:    ]
Dec  3 02:05:44 compute-0 charming_dubinsky[430128]: }
Dec  3 02:05:44 compute-0 systemd[1]: libpod-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope: Deactivated successfully.
Dec  3 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.845604057 +0000 UTC m=+1.117566464 container died de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3abb959d39b9b731a146e0cbf9c5a71152cbeaf478ce6b9e31d06a3ce2027e1-merged.mount: Deactivated successfully.
Dec  3 02:05:44 compute-0 podman[430112]: 2025-12-03 02:05:44.951181335 +0000 UTC m=+1.223143702 container remove de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:05:44 compute-0 systemd[1]: libpod-conmon-de78d3b13780f2925d455882e660d136397c66fb0f53be5cbd91160feb8288e2.scope: Deactivated successfully.
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.164477998 +0000 UTC m=+0.108634804 container create 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.125957452 +0000 UTC m=+0.070114298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:46 compute-0 systemd[1]: Started libpod-conmon-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope.
Dec  3 02:05:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.336463788 +0000 UTC m=+0.280620634 container init 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.357515001 +0000 UTC m=+0.301671807 container start 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.365153657 +0000 UTC m=+0.309310523 container attach 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:05:46 compute-0 inspiring_shirley[430304]: 167 167
Dec  3 02:05:46 compute-0 systemd[1]: libpod-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope: Deactivated successfully.
Dec  3 02:05:46 compute-0 conmon[430304]: conmon 364cbeb1b9795848af09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope/container/memory.events
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.373628656 +0000 UTC m=+0.317785462 container died 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0859741c3500c8cd6e7c117830e9223b6be69123cdd51fd8ecf0f8c1490b876a-merged.mount: Deactivated successfully.
Dec  3 02:05:46 compute-0 podman[430290]: 2025-12-03 02:05:46.447648753 +0000 UTC m=+0.391805529 container remove 364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:05:46 compute-0 systemd[1]: libpod-conmon-364cbeb1b9795848af09c4bfd9de598aaeb4ed7ad74360bfa39b0f3118601f6f.scope: Deactivated successfully.
Dec  3 02:05:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.762154032 +0000 UTC m=+0.097699406 container create e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.717180283 +0000 UTC m=+0.052725707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:05:46 compute-0 systemd[1]: Started libpod-conmon-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope.
Dec  3 02:05:46 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.915194387 +0000 UTC m=+0.250739821 container init e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.929900622 +0000 UTC m=+0.265445986 container start e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 02:05:46 compute-0 podman[430329]: 2025-12-03 02:05:46.936239851 +0000 UTC m=+0.271785265 container attach e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:05:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:05:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:05:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:05:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903521338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]: {
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_id": 2,
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "type": "bluestore"
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    },
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_id": 1,
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "type": "bluestore"
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    },
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_id": 0,
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:        "type": "bluestore"
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]:    }
Dec  3 02:05:47 compute-0 vigorous_shannon[430345]: }
Dec  3 02:05:48 compute-0 systemd[1]: libpod-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Deactivated successfully.
Dec  3 02:05:48 compute-0 systemd[1]: libpod-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Consumed 1.097s CPU time.
Dec  3 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:48 compute-0 podman[430378]: 2025-12-03 02:05:48.111866661 +0000 UTC m=+0.055598029 container died e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:05:48 compute-0 nova_compute[351485]: 2025-12-03 02:05:48.138 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe750ebacef5dc113542dcb12557ec98ea81008a231dcd082cac72ab000fe1d-merged.mount: Deactivated successfully.
Dec  3 02:05:48 compute-0 podman[430378]: 2025-12-03 02:05:48.212755626 +0000 UTC m=+0.156486914 container remove e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 02:05:48 compute-0 systemd[1]: libpod-conmon-e71e499a1a84e65d0a69fcb832afd1134561d05be36b771461ebff70251811b1.scope: Deactivated successfully.
Dec  3 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:05:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:05:48 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:48 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 23d7452f-a0db-41b6-9dba-56033c6f0700 does not exist
Dec  3 02:05:48 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 398664ad-b489-41eb-9640-bcff1c9b1f27 does not exist
Dec  3 02:05:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:49 compute-0 nova_compute[351485]: 2025-12-03 02:05:49.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:49 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:05:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:51 compute-0 podman[430445]: 2025-12-03 02:05:51.879891644 +0000 UTC m=+0.116406194 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:05:51 compute-0 podman[430443]: 2025-12-03 02:05:51.887018125 +0000 UTC m=+0.129511494 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  3 02:05:51 compute-0 podman[430444]: 2025-12-03 02:05:51.922600238 +0000 UTC m=+0.158475420 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true)
Dec  3 02:05:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:53 compute-0 nova_compute[351485]: 2025-12-03 02:05:53.143 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:54 compute-0 nova_compute[351485]: 2025-12-03 02:05:54.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:56 compute-0 podman[430499]: 2025-12-03 02:05:56.877203051 +0000 UTC m=+0.129040740 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:05:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:05:58 compute-0 nova_compute[351485]: 2025-12-03 02:05:58.145 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:05:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:05:59 compute-0 nova_compute[351485]: 2025-12-03 02:05:59.209 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.635 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.635 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:05:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:05:59.637 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:05:59 compute-0 podman[158098]: time="2025-12-03T02:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:05:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:05:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Dec  3 02:06:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:00 compute-0 podman[430518]: 2025-12-03 02:06:00.896203171 +0000 UTC m=+0.140397000 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, 
vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:06:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:06:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:02 compute-0 podman[430541]: 2025-12-03 02:06:02.904801429 +0000 UTC m=+0.115856308 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 02:06:02 compute-0 podman[430539]: 2025-12-03 02:06:02.912065844 +0000 UTC m=+0.147252053 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9)
Dec  3 02:06:02 compute-0 podman[430540]: 2025-12-03 02:06:02.91619854 +0000 UTC m=+0.142027976 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:06:02 compute-0 podman[430538]: 2025-12-03 02:06:02.936461742 +0000 UTC m=+0.172731942 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 02:06:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:03 compute-0 nova_compute[351485]: 2025-12-03 02:06:03.149 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:04 compute-0 nova_compute[351485]: 2025-12-03 02:06:04.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:05 compute-0 nova_compute[351485]: 2025-12-03 02:06:05.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.639 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:06:06 compute-0 nova_compute[351485]: 2025-12-03 02:06:06.642 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:06:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:06:07 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301704655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.164 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.395 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.395 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.397 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.405 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.406 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.407 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.416 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.417 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.418 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.426 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.427 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:07 compute-0 nova_compute[351485]: 2025-12-03 02:06:07.427 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.083 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.086 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3169MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.087 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.232 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.233 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 52862152-12c7-4236-89c3-67750ecbed7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.234 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.234 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.235 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.235 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.390 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:06:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:06:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608886054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.971 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:06:08 compute-0 nova_compute[351485]: 2025-12-03 02:06:08.986 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.005 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.009 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.010 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:09 compute-0 nova_compute[351485]: 2025-12-03 02:06:09.218 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.423 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.424 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.426 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.427 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.427 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.431 351492 INFO nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Terminating instance#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.433 351492 DEBUG nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:06:10 compute-0 kernel: tap521d2181-8f (unregistering): left promiscuous mode
Dec  3 02:06:10 compute-0 NetworkManager[48912]: <info>  [1764727570.6326] device (tap521d2181-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:06:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00050|binding|INFO|Releasing lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 from this chassis (sb_readonly=0)
Dec  3 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00051|binding|INFO|Setting lport 521d2181-8f17-4f4d-a3a6-98de1e17b734 down in Southbound
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 ovn_controller[89134]: 2025-12-03T02:06:10Z|00052|binding|INFO|Removing iface tap521d2181-8f ovn-installed in OVS
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.655 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.666 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:09:91 192.168.0.178'], port_security=['fa:16:3e:8e:09:91 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '52862152-12c7-4236-89c3-67750ecbed7a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-ppxv5rwaptjv-bbqmylrxhl37-port-ucken5qvu3kv', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=521d2181-8f17-4f4d-a3a6-98de1e17b734) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.668 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 521d2181-8f17-4f4d-a3a6-98de1e17b734 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.671 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.697 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.707 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b232e-e430-4a23-b457-b8dea94026f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 13.793s CPU time.
Dec  3 02:06:10 compute-0 systemd-machined[138558]: Machine qemu-2-instance-00000002 terminated.
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.752 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[50598518-a71e-41c9-80a3-64089deb22c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.757 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d5787ff1-ed45-483b-bf42-9751b5ef6393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.799 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e73e6f82-da13-475f-a42d-0c35c206b48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.829 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[88efeb78-5965-4718-9061-946abe76573a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430680, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.860 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d768aa1d-c769-41ed-9a7b-02b5a2ee11ba]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430681, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430681, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.866 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.870 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.881 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.883 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.884 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.886 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:06:10 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:10.887 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.890 351492 INFO nova.virt.libvirt.driver [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Instance destroyed successfully.#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.891 351492 DEBUG nova.objects.instance [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 52862152-12c7-4236-89c3-67750ecbed7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.908 351492 DEBUG nova.virt.libvirt.vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T01:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-ppxv5rwaptjv-bbqmylrxhl37-vnf-x65t7efzpd2l',id=2,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T01:56:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-eunmeq81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T01:56:06Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk2MTkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  3 02:06:10 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk2M
TkwMzY3OTMwODQ0NTg3OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5NjE5MDM2NzkzMDg0NDU4Nzk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTYxOTAzNjc5MzA4NDQ1ODc5PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=52862152-12c7-4236-89c3-67750ecbed7a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.908 351492 DEBUG nova.network.os_vif_util [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.909 351492 DEBUG nova.network.os_vif_util [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.910 351492 DEBUG os_vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.913 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.914 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521d2181-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.916 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.918 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.923 351492 INFO os_vif [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:09:91,bridge_name='br-int',has_traffic_filtering=True,id=521d2181-8f17-4f4d-a3a6-98de1e17b734,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap521d2181-8f')#033[00m
Dec  3 02:06:10 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:06:10.908 351492 DEBUG nova.virt.libvirt.vif [None req-cf0a54e0-d2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.979 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.980 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.981 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.982 351492 DEBUG oslo_concurrency.lockutils [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.982 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:06:10 compute-0 nova_compute[351485]: 2025-12-03 02:06:10.986 351492 DEBUG nova.compute.manager [req-6e2aefb9-bcb1-4420-b0ca-516ef8a6ac68 req-782be09b-6f98-4aac-890c-ce5497aba7a8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-unplugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:11.011 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:06:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:11.012 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.016 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.038 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.038 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.294 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.295 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.745 351492 DEBUG nova.compute.manager [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.745 351492 DEBUG nova.compute.manager [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing instance network info cache due to event network-changed-521d2181-8f17-4f4d-a3a6-98de1e17b734. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.746 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.747 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:06:11 compute-0 nova_compute[351485]: 2025-12-03 02:06:11.748 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Refreshing network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.273 351492 INFO nova.virt.libvirt.driver [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deleting instance files /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a_del#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.275 351492 INFO nova.virt.libvirt.driver [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deletion of /var/lib/nova/instances/52862152-12c7-4236-89c3-67750ecbed7a_del complete#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.368 351492 DEBUG nova.virt.libvirt.host [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.369 351492 INFO nova.virt.libvirt.host [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] UEFI support detected#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.372 351492 INFO nova.compute.manager [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 1.94 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG oslo.service.loopingcall [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:06:12 compute-0 nova_compute[351485]: 2025-12-03 02:06:12.373 351492 DEBUG nova.network.neutron [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:06:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 4 op/s
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.068 351492 DEBUG nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.069 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "52862152-12c7-4236-89c3-67750ecbed7a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.069 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.073 351492 DEBUG oslo_concurrency.lockutils [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.074 351492 DEBUG nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] No waiting events found dispatching network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.075 351492 WARNING nova.compute.manager [req-42e9ffd2-4cf4-4359-8126-ed11cf8b3295 req-909720d2-c4e9-4096-923a-3dc52300210d 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Received unexpected event network-vif-plugged-521d2181-8f17-4f4d-a3a6-98de1e17b734 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:06:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.291 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.313 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.314 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.316 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.316 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.502 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updated VIF entry in instance network info cache for port 521d2181-8f17-4f4d-a3a6-98de1e17b734. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.503 351492 DEBUG nova.network.neutron [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [{"id": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "address": "fa:16:3e:8e:09:91", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521d2181-8f", "ovs_interfaceid": "521d2181-8f17-4f4d-a3a6-98de1e17b734", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.526 351492 DEBUG oslo_concurrency.lockutils [req-920f79b6-3dab-4716-be4f-8f035ae5d09e req-e7ddba86-3a6b-449a-8a38-c3a4d63d30aa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-52862152-12c7-4236-89c3-67750ecbed7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.655 351492 DEBUG nova.network.neutron [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.676 351492 INFO nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Took 1.30 seconds to deallocate network for instance.#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.731 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.732 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:13 compute-0 nova_compute[351485]: 2025-12-03 02:06:13.886 351492 DEBUG oslo_concurrency.processutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:06:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:06:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195037886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.432 351492 DEBUG oslo_concurrency.processutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.446 351492 DEBUG nova.compute.provider_tree [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.481 351492 DEBUG nova.scheduler.client.report [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.517 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.545 351492 INFO nova.scheduler.client.report [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 52862152-12c7-4236-89c3-67750ecbed7a#033[00m
Dec  3 02:06:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 239 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 341 B/s wr, 9 op/s
Dec  3 02:06:14 compute-0 nova_compute[351485]: 2025-12-03 02:06:14.660 351492 DEBUG oslo_concurrency.lockutils [None req-cf0a54e0-d25b-4d17-b7f6-f51bc4f4314e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "52862152-12c7-4236-89c3-67750ecbed7a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:15.015 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:06:15 compute-0 nova_compute[351485]: 2025-12-03 02:06:15.918 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:06:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:18 compute-0 nova_compute[351485]: 2025-12-03 02:06:18.160 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:06:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:06:20 compute-0 nova_compute[351485]: 2025-12-03 02:06:20.922 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:06:22 compute-0 podman[430736]: 2025-12-03 02:06:22.864707879 +0000 UTC m=+0.116022533 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  3 02:06:22 compute-0 podman[430737]: 2025-12-03 02:06:22.880746161 +0000 UTC m=+0.125263873 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 02:06:22 compute-0 podman[430738]: 2025-12-03 02:06:22.891984608 +0000 UTC m=+0.123569726 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:06:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:23 compute-0 nova_compute[351485]: 2025-12-03 02:06:23.163 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Dec  3 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.885 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727570.884012, 52862152-12c7-4236-89c3-67750ecbed7a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.886 351492 INFO nova.compute.manager [-] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.914 351492 DEBUG nova.compute.manager [None req-679c2fab-95b2-49d1-a0a1-a2b371db4d88 - - - - - -] [instance: 52862152-12c7-4236-89c3-67750ecbed7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:06:25 compute-0 nova_compute[351485]: 2025-12-03 02:06:25.926 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Dec  3 02:06:27 compute-0 systemd[1]: Starting dnf makecache...
Dec  3 02:06:27 compute-0 podman[430791]: 2025-12-03 02:06:27.906094269 +0000 UTC m=+0.149054344 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:06:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:28 compute-0 nova_compute[351485]: 2025-12-03 02:06:28.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:28 compute-0 dnf[430811]: Metadata cache refreshed recently.
Dec  3 02:06:28 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  3 02:06:28 compute-0 systemd[1]: Finished dnf makecache.
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:06:28
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups']
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:06:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:06:29 compute-0 podman[158098]: time="2025-12-03T02:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:06:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:06:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec  3 02:06:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:30 compute-0 nova_compute[351485]: 2025-12-03 02:06:30.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:06:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:06:31 compute-0 podman[430812]: 2025-12-03 02:06:31.893637851 +0000 UTC m=+0.146367958 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible)
Dec  3 02:06:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:33 compute-0 nova_compute[351485]: 2025-12-03 02:06:33.170 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:33 compute-0 podman[430833]: 2025-12-03 02:06:33.877941816 +0000 UTC m=+0.101658228 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:06:33 compute-0 podman[430832]: 2025-12-03 02:06:33.891415796 +0000 UTC m=+0.125572582 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 02:06:33 compute-0 podman[430834]: 2025-12-03 02:06:33.909112335 +0000 UTC m=+0.138085355 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:06:33 compute-0 podman[430831]: 2025-12-03 02:06:33.940190651 +0000 UTC m=+0.178412162 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:06:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:35 compute-0 nova_compute[351485]: 2025-12-03 02:06:35.932 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:38 compute-0 nova_compute[351485]: 2025-12-03 02:06:38.174 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:06:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:40 compute-0 nova_compute[351485]: 2025-12-03 02:06:40.935 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:43 compute-0 nova_compute[351485]: 2025-12-03 02:06:43.177 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:45 compute-0 nova_compute[351485]: 2025-12-03 02:06:45.938 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:45 compute-0 ovn_controller[89134]: 2025-12-03T02:06:45Z|00053|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Dec  3 02:06:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:06:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:06:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:06:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4084479757' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:06:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:48 compute-0 nova_compute[351485]: 2025-12-03 02:06:48.180 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 67998f89-8d2c-4e87-b5b6-4d19691e5b49 does not exist
Dec  3 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46ff17bb-7759-4d0a-991d-a8b9b4001883 does not exist
Dec  3 02:06:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b5ea6251-dd32-41c3-aecd-af67f98dced8 does not exist
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:06:50 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:06:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:06:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:50 compute-0 nova_compute[351485]: 2025-12-03 02:06:50.940 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.109715845 +0000 UTC m=+0.091353057 container create 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.066000042 +0000 UTC m=+0.047637314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:51 compute-0 systemd[1]: Started libpod-conmon-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope.
Dec  3 02:06:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.271896439 +0000 UTC m=+0.253533671 container init 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.29108655 +0000 UTC m=+0.272723742 container start 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.299137747 +0000 UTC m=+0.280775039 container attach 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:06:51 compute-0 heuristic_carson[431204]: 167 167
Dec  3 02:06:51 compute-0 systemd[1]: libpod-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope: Deactivated successfully.
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.306396741 +0000 UTC m=+0.288033953 container died 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ffcdb462fc309f4ee14db168fefe15555929477136c00bc36a5fe3fdfc2307b-merged.mount: Deactivated successfully.
Dec  3 02:06:51 compute-0 podman[431189]: 2025-12-03 02:06:51.37442149 +0000 UTC m=+0.356058682 container remove 856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:06:51 compute-0 systemd[1]: libpod-conmon-856e26c63885fd1674749bb15ca1a9ca21fee084b66e58d1e6f23954ca4d5640.scope: Deactivated successfully.
Dec  3 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.639663809 +0000 UTC m=+0.081565641 container create 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.607946335 +0000 UTC m=+0.049848207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:51 compute-0 systemd[1]: Started libpod-conmon-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope.
Dec  3 02:06:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.801351038 +0000 UTC m=+0.243252930 container init 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.818313327 +0000 UTC m=+0.260215159 container start 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:06:51 compute-0 podman[431228]: 2025-12-03 02:06:51.82587424 +0000 UTC m=+0.267776082 container attach 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:06:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:53 compute-0 relaxed_torvalds[431245]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:06:53 compute-0 relaxed_torvalds[431245]: --> relative data size: 1.0
Dec  3 02:06:53 compute-0 relaxed_torvalds[431245]: --> All data devices are unavailable
Dec  3 02:06:53 compute-0 systemd[1]: libpod-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Deactivated successfully.
Dec  3 02:06:53 compute-0 systemd[1]: libpod-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Consumed 1.205s CPU time.
Dec  3 02:06:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:53 compute-0 nova_compute[351485]: 2025-12-03 02:06:53.183 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:53 compute-0 podman[431275]: 2025-12-03 02:06:53.21001437 +0000 UTC m=+0.060940819 container died 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-557ec1c4761f940d859f57c539bb29e579e55bbc1cea8780872a1365d1fd1ddb-merged.mount: Deactivated successfully.
Dec  3 02:06:53 compute-0 podman[431275]: 2025-12-03 02:06:53.2826952 +0000 UTC m=+0.133621599 container remove 717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:06:53 compute-0 systemd[1]: libpod-conmon-717e6fb8c9c985c200ae826473e9b22292f5cfd938abf340c1537d0bebf2c08b.scope: Deactivated successfully.
Dec  3 02:06:53 compute-0 podman[431274]: 2025-12-03 02:06:53.300317577 +0000 UTC m=+0.155050384 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:06:53 compute-0 podman[431277]: 2025-12-03 02:06:53.30504675 +0000 UTC m=+0.137974492 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:06:53 compute-0 podman[431276]: 2025-12-03 02:06:53.315268618 +0000 UTC m=+0.157981046 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.525810553 +0000 UTC m=+0.100075113 container create bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.487948235 +0000 UTC m=+0.062212835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:54 compute-0 systemd[1]: Started libpod-conmon-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope.
Dec  3 02:06:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.685340782 +0000 UTC m=+0.259605372 container init bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.702829395 +0000 UTC m=+0.277093945 container start bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:06:54 compute-0 podman[431481]: 2025-12-03 02:06:54.708947968 +0000 UTC m=+0.283212578 container attach bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:06:54 compute-0 sweet_dhawan[431497]: 167 167
Dec  3 02:06:54 compute-0 systemd[1]: libpod-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope: Deactivated successfully.
Dec  3 02:06:54 compute-0 podman[431502]: 2025-12-03 02:06:54.814220986 +0000 UTC m=+0.071972700 container died bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff4facf74d5cbab27c1877fc6af51cd92218fe53c0856e17f800f3338024796-merged.mount: Deactivated successfully.
Dec  3 02:06:54 compute-0 podman[431502]: 2025-12-03 02:06:54.900617703 +0000 UTC m=+0.158369317 container remove bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:06:54 compute-0 systemd[1]: libpod-conmon-bbf36ff6eb82eca0193d757cbcc05aee95097270e860e7118ea283da4f6b1705.scope: Deactivated successfully.
Dec  3 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.219052963 +0000 UTC m=+0.093543599 container create 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.175673829 +0000 UTC m=+0.050164495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:55 compute-0 systemd[1]: Started libpod-conmon-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope.
Dec  3 02:06:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.382067849 +0000 UTC m=+0.256558485 container init 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.402480675 +0000 UTC m=+0.276971311 container start 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:06:55 compute-0 podman[431522]: 2025-12-03 02:06:55.407798395 +0000 UTC m=+0.282289111 container attach 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:06:55 compute-0 nova_compute[351485]: 2025-12-03 02:06:55.942 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.215818) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616215869, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3476095, "memory_usage": 3528472, "flush_reason": "Manual Compaction"}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616242232, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3410257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30133, "largest_seqno": 32188, "table_properties": {"data_size": 3400745, "index_size": 6070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18769, "raw_average_key_size": 20, "raw_value_size": 3381998, "raw_average_value_size": 3624, "num_data_blocks": 269, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727384, "oldest_key_time": 1764727384, "file_creation_time": 1764727616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 26482 microseconds, and 13577 cpu microseconds.
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.242296) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3410257 bytes OK
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.242317) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244688) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244704) EVENT_LOG_v1 {"time_micros": 1764727616244699, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.244722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3467476, prev total WAL file size 3467476, number of live WAL files 2.
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.246075) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3330KB)], [68(7259KB)]
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616246123, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10844140, "oldest_snapshot_seqno": -1}
Dec  3 02:06:56 compute-0 confident_bhabha[431536]: {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    "0": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "devices": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "/dev/loop3"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            ],
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_name": "ceph_lv0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_size": "21470642176",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "name": "ceph_lv0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "tags": {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_name": "ceph",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.crush_device_class": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.encrypted": "0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_id": "0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.vdo": "0"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            },
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "vg_name": "ceph_vg0"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        }
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    ],
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    "1": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "devices": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "/dev/loop4"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            ],
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_name": "ceph_lv1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_size": "21470642176",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "name": "ceph_lv1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "tags": {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_name": "ceph",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.crush_device_class": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.encrypted": "0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_id": "1",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.vdo": "0"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            },
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "vg_name": "ceph_vg1"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        }
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    ],
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    "2": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "devices": [
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "/dev/loop5"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            ],
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_name": "ceph_lv2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_size": "21470642176",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "name": "ceph_lv2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "tags": {
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.cluster_name": "ceph",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.crush_device_class": "",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.encrypted": "0",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osd_id": "2",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:                "ceph.vdo": "0"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            },
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "type": "block",
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:            "vg_name": "ceph_vg2"
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:        }
Dec  3 02:06:56 compute-0 confident_bhabha[431536]:    ]
Dec  3 02:06:56 compute-0 confident_bhabha[431536]: }
Dec  3 02:06:56 compute-0 systemd[1]: libpod-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope: Deactivated successfully.
Dec  3 02:06:56 compute-0 podman[431522]: 2025-12-03 02:06:56.29742069 +0000 UTC m=+1.171911356 container died 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5357 keys, 9083373 bytes, temperature: kUnknown
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616305211, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9083373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9046653, "index_size": 22210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 134300, "raw_average_key_size": 25, "raw_value_size": 8948848, "raw_average_value_size": 1670, "num_data_blocks": 917, "num_entries": 5357, "num_filter_entries": 5357, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.305507) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9083373 bytes
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.309190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.3 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5871, records dropped: 514 output_compression: NoCompression
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.309221) EVENT_LOG_v1 {"time_micros": 1764727616309206, "job": 38, "event": "compaction_finished", "compaction_time_micros": 59173, "compaction_time_cpu_micros": 27398, "output_level": 6, "num_output_files": 1, "total_output_size": 9083373, "num_input_records": 5871, "num_output_records": 5357, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616310621, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727616313656, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.245912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.313996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:06:56.314021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae191baa68e501ea046b986f50b082233e6fd3d2312548da46289b18d848b0b1-merged.mount: Deactivated successfully.
Dec  3 02:06:56 compute-0 podman[431522]: 2025-12-03 02:06:56.407245017 +0000 UTC m=+1.281735653 container remove 39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:06:56 compute-0 systemd[1]: libpod-conmon-39c94cb38e5d0eff8f691b2e18a8c300b9b6e7d4eec4a7cbfa4b63ed3b2d7b51.scope: Deactivated successfully.
Dec  3 02:06:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.584895196 +0000 UTC m=+0.099922909 container create 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.54889698 +0000 UTC m=+0.063924703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:57 compute-0 systemd[1]: Started libpod-conmon-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope.
Dec  3 02:06:57 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.712162664 +0000 UTC m=+0.227190447 container init 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.724356928 +0000 UTC m=+0.239384641 container start 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.731398307 +0000 UTC m=+0.246426080 container attach 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:06:57 compute-0 vibrant_snyder[431715]: 167 167
Dec  3 02:06:57 compute-0 systemd[1]: libpod-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope: Deactivated successfully.
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.735081061 +0000 UTC m=+0.250108744 container died 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b3c40a97966b8baad31afc8a22e026d9bcbfd703d892c17042be1ce586eb5c-merged.mount: Deactivated successfully.
Dec  3 02:06:57 compute-0 podman[431699]: 2025-12-03 02:06:57.810461066 +0000 UTC m=+0.325488779 container remove 01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:06:57 compute-0 systemd[1]: libpod-conmon-01123f5efef7572e421c431df14dc2ed41f4cc7473dd02cdd4fb334330932247.scope: Deactivated successfully.
Dec  3 02:06:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.15494597 +0000 UTC m=+0.095611647 container create 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:06:58 compute-0 nova_compute[351485]: 2025-12-03 02:06:58.187 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.124145842 +0000 UTC m=+0.064811519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:06:58 compute-0 systemd[1]: Started libpod-conmon-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope.
Dec  3 02:06:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.303429147 +0000 UTC m=+0.244094804 container init 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.332777345 +0000 UTC m=+0.273442972 container start 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:06:58 compute-0 podman[431738]: 2025-12-03 02:06:58.338508526 +0000 UTC m=+0.279174163 container attach 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:06:58 compute-0 podman[431752]: 2025-12-03 02:06:58.350202366 +0000 UTC m=+0.126841867 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:06:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]: {
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_id": 2,
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "type": "bluestore"
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    },
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_id": 1,
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "type": "bluestore"
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    },
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_id": 0,
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:        "type": "bluestore"
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]:    }
Dec  3 02:06:59 compute-0 compassionate_einstein[431760]: }
Dec  3 02:06:59 compute-0 systemd[1]: libpod-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Deactivated successfully.
Dec  3 02:06:59 compute-0 podman[431738]: 2025-12-03 02:06:59.585114958 +0000 UTC m=+1.525780635 container died 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:06:59 compute-0 systemd[1]: libpod-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Consumed 1.239s CPU time.
Dec  3 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.636 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.637 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:06:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:06:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d82a5e376d278573416e4234ad4aa4c2d39c73068de1e2c4fe645067124d5e6-merged.mount: Deactivated successfully.
Dec  3 02:06:59 compute-0 podman[431738]: 2025-12-03 02:06:59.693489584 +0000 UTC m=+1.634155231 container remove 9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:06:59 compute-0 systemd[1]: libpod-conmon-9cd1da86e0612bf2d8137aa0d0e736e91cc1acc5fb762e57d22b5182473bf7f3.scope: Deactivated successfully.
Dec  3 02:06:59 compute-0 podman[158098]: time="2025-12-03T02:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:06:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:06:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:06:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:06:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:06:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
Dec  3 02:06:59 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:06:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b7c441cb-0e37-4a51-b041-c3b2064a8238 does not exist
Dec  3 02:06:59 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3a1c9c70-4c1f-44ba-82f0-6e2e1e83b2c9 does not exist
Dec  3 02:07:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:07:00 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:07:00 compute-0 nova_compute[351485]: 2025-12-03 02:07:00.947 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:07:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:07:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:02 compute-0 podman[431868]: 2025-12-03 02:07:02.904982953 +0000 UTC m=+0.148745065 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:07:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:03 compute-0 nova_compute[351485]: 2025-12-03 02:07:03.190 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:04 compute-0 podman[431888]: 2025-12-03 02:07:04.866986858 +0000 UTC m=+0.110177718 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:07:04 compute-0 podman[431887]: 2025-12-03 02:07:04.878247385 +0000 UTC m=+0.125761337 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec  3 02:07:04 compute-0 podman[431889]: 2025-12-03 02:07:04.882205967 +0000 UTC m=+0.115117427 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  3 02:07:04 compute-0 podman[431886]: 2025-12-03 02:07:04.911645327 +0000 UTC m=+0.164957613 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:07:05 compute-0 nova_compute[351485]: 2025-12-03 02:07:05.949 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:07:07 compute-0 nova_compute[351485]: 2025-12-03 02:07:07.614 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:07:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.194 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.613 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.614 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.719 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.720 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.721 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.722 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:07:08 compute-0 nova_compute[351485]: 2025-12-03 02:07:08.723 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:07:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:07:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730002295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.234 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.362 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.363 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.364 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.376 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.377 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.379 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.388 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.389 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:09 compute-0 nova_compute[351485]: 2025-12-03 02:07:09.389 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.019 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.020 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.021 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.021 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.271 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.271 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.272 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.272 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.273 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.497 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:07:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:10 compute-0 nova_compute[351485]: 2025-12-03 02:07:10.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:07:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242033160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.054 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.067 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.089 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.092 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:07:11 compute-0 nova_compute[351485]: 2025-12-03 02:07:11.092 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:07:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.057 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.057 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.058 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:07:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.196 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.712 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.713 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:07:13 compute-0 nova_compute[351485]: 2025-12-03 02:07:13.714 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:07:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.348 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.371 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.372 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.373 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.373 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.374 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:15 compute-0 nova_compute[351485]: 2025-12-03 02:07:15.956 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:18 compute-0 nova_compute[351485]: 2025-12-03 02:07:18.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.508 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'name': 'vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.538 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:07:19.540115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.585 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.624 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.667 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.669 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:07:19.669910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.678 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.686 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.693 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.695 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.696 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.696 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:07:19.695248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.699 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.699 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:07:19.699142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.700 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.702 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.703 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.703 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.704 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:07:19.702604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:07:19.705983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.706 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.707 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.707 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:07:19.709393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.744 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.745 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.745 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.783 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.784 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.785 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.829 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.830 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.830 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:07:19.833236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.938 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.940 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:19.940 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.039 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.040 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.041 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.147 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.148 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.149 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:07:20.153615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.154 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.155 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.156 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.157 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 1828594840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 317962452 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.latency volume: 234609421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.158 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.159 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.160 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.160 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:07:20.157440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:07:20.161926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.164 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.166 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.167 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:07:20.166643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.169 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.170 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.171 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.174 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.175 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:07:20.169353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 5579657720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:07:20.173503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.178 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 23420930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.179 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.179 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:07:20.177789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.180 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.181 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.183 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:07:20.183657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.184 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.185 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.185 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.186 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.187 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.189 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:07:20.188706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.190 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/cpu volume: 40100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.191 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 41150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.192 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 44670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:07:20.191460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.194 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:07:20.196952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.202 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.203 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:07:20.201740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.204 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:07:20.206409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.207 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.208 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.208 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.209 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.209 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.210 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.210 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.211 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.212 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.213 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.214 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.214 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:07:20.213522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:07:20.217256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.220 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:07:20.220402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 14 DEBUG ceilometer.compute.pollsters [-] 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.221 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.222 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.224 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.228 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.229 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.230 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.231 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:07:20.232 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:07:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:20 compute-0 nova_compute[351485]: 2025-12-03 02:07:20.959 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:21 compute-0 nova_compute[351485]: 2025-12-03 02:07:21.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:21 compute-0 nova_compute[351485]: 2025-12-03 02:07:21.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:07:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:23 compute-0 nova_compute[351485]: 2025-12-03 02:07:23.203 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:23 compute-0 podman[432020]: 2025-12-03 02:07:23.881265164 +0000 UTC m=+0.109629423 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:07:23 compute-0 podman[432018]: 2025-12-03 02:07:23.88573614 +0000 UTC m=+0.123716070 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  3 02:07:23 compute-0 podman[432019]: 2025-12-03 02:07:23.891971696 +0000 UTC m=+0.126925951 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec  3 02:07:24 compute-0 nova_compute[351485]: 2025-12-03 02:07:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:24 compute-0 nova_compute[351485]: 2025-12-03 02:07:24.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:07:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:25 compute-0 nova_compute[351485]: 2025-12-03 02:07:25.960 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:28 compute-0 nova_compute[351485]: 2025-12-03 02:07:28.207 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:07:28
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'images', 'default.rgw.log']
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:28 compute-0 podman[432075]: 2025-12-03 02:07:28.886917904 +0000 UTC m=+0.131212521 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:07:28 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:07:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:07:29 compute-0 podman[158098]: time="2025-12-03T02:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:07:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:07:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8651 "" "Go-http-client/1.1"
Dec  3 02:07:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:30 compute-0 nova_compute[351485]: 2025-12-03 02:07:30.963 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:07:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:07:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:33 compute-0 nova_compute[351485]: 2025-12-03 02:07:33.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:33 compute-0 podman[432094]: 2025-12-03 02:07:33.880267866 +0000 UTC m=+0.138581539 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:07:34 compute-0 nova_compute[351485]: 2025-12-03 02:07:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:07:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:35 compute-0 podman[432116]: 2025-12-03 02:07:35.886791555 +0000 UTC m=+0.116670721 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  3 02:07:35 compute-0 podman[432114]: 2025-12-03 02:07:35.89016095 +0000 UTC m=+0.131191410 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter)
Dec  3 02:07:35 compute-0 podman[432115]: 2025-12-03 02:07:35.89192488 +0000 UTC m=+0.130010017 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:07:35 compute-0 podman[432113]: 2025-12-03 02:07:35.919078116 +0000 UTC m=+0.170106748 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:07:35 compute-0 nova_compute[351485]: 2025-12-03 02:07:35.967 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:38 compute-0 nova_compute[351485]: 2025-12-03 02:07:38.213 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:07:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:07:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7234 writes, 32K keys, 7234 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7234 writes, 7234 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1298 writes, 5905 keys, 1298 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s#012Interval WAL: 1298 writes, 1298 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     97.4      0.41              0.18        19    0.021       0      0       0.0       0.0#012  L6      1/0    8.66 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    134.1    108.4      1.21              0.59        18    0.067     86K    10K       0.0       0.0#012 Sum      1/0    8.66 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3    100.3    105.6      1.61              0.78        37    0.044     86K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    111.6    115.9      0.35              0.17         8    0.044     22K   2526       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    134.1    108.4      1.21              0.59        18    0.067     86K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.0      0.40              0.18        18    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.039, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.6 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 19.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000195 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1288,19.15 MB,6.2164%) FilterBlock(38,247.67 KB,0.0785283%) IndexBlock(38,445.08 KB,0.141119%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 02:07:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:40 compute-0 nova_compute[351485]: 2025-12-03 02:07:40.970 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:43 compute-0 nova_compute[351485]: 2025-12-03 02:07:43.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:45 compute-0 nova_compute[351485]: 2025-12-03 02:07:45.973 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:07:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:07:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:07:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261362780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:07:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:48 compute-0 nova_compute[351485]: 2025-12-03 02:07:48.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:50 compute-0 nova_compute[351485]: 2025-12-03 02:07:50.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:53 compute-0 nova_compute[351485]: 2025-12-03 02:07:53.223 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:54 compute-0 podman[432197]: 2025-12-03 02:07:54.894227705 +0000 UTC m=+0.143374344 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:07:54 compute-0 podman[432198]: 2025-12-03 02:07:54.897918709 +0000 UTC m=+0.141548563 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec  3 02:07:54 compute-0 podman[432199]: 2025-12-03 02:07:54.917042478 +0000 UTC m=+0.155919898 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:07:55 compute-0 nova_compute[351485]: 2025-12-03 02:07:55.980 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:07:58 compute-0 nova_compute[351485]: 2025-12-03 02:07:58.226 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:07:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.636 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:07:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:07:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:07:59 compute-0 podman[158098]: time="2025-12-03T02:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:07:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:07:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec  3 02:07:59 compute-0 podman[432254]: 2025-12-03 02:07:59.928395362 +0000 UTC m=+0.173577535 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Dec  3 02:08:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:00 compute-0 nova_compute[351485]: 2025-12-03 02:08:00.982 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:08:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8f4d7105-bcb1-4588-8105-f87fba16754e does not exist
Dec  3 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e54821e9-04b5-4ccc-a42a-383fe18036d9 does not exist
Dec  3 02:08:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb4f502c-2112-4f00-951b-f6821f11259a does not exist
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:08:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:08:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.787834043 +0000 UTC m=+0.097396557 container create 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.747035123 +0000 UTC m=+0.056597677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:02 compute-0 systemd[1]: Started libpod-conmon-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope.
Dec  3 02:08:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.953204917 +0000 UTC m=+0.262767481 container init 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.973102388 +0000 UTC m=+0.282664912 container start 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.979998962 +0000 UTC m=+0.289561506 container attach 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:08:02 compute-0 reverent_hypatia[432561]: 167 167
Dec  3 02:08:02 compute-0 systemd[1]: libpod-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope: Deactivated successfully.
Dec  3 02:08:02 compute-0 conmon[432561]: conmon 68cd753ce3e388c9949e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope/container/memory.events
Dec  3 02:08:02 compute-0 podman[432545]: 2025-12-03 02:08:02.99589013 +0000 UTC m=+0.305452654 container died 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c8747fcb8029bd433fd38e6ae6a3a1867f54ddbca00c6bf6c5f3ed4638135e-merged.mount: Deactivated successfully.
Dec  3 02:08:03 compute-0 podman[432545]: 2025-12-03 02:08:03.084829128 +0000 UTC m=+0.394391652 container remove 68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 02:08:03 compute-0 systemd[1]: libpod-conmon-68cd753ce3e388c9949e997c910140a6880ff3463b0d6d766053416f65e62573.scope: Deactivated successfully.
Dec  3 02:08:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:03 compute-0 nova_compute[351485]: 2025-12-03 02:08:03.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.36646766 +0000 UTC m=+0.073641658 container create 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:08:03 compute-0 systemd[1]: Started libpod-conmon-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope.
Dec  3 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.342481923 +0000 UTC m=+0.049655951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.517017555 +0000 UTC m=+0.224191583 container init 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.544906751 +0000 UTC m=+0.252080759 container start 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:08:03 compute-0 podman[432583]: 2025-12-03 02:08:03.549665965 +0000 UTC m=+0.256839963 container attach 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:04 compute-0 fervent_heyrovsky[432599]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:08:04 compute-0 fervent_heyrovsky[432599]: --> relative data size: 1.0
Dec  3 02:08:04 compute-0 fervent_heyrovsky[432599]: --> All data devices are unavailable
Dec  3 02:08:04 compute-0 systemd[1]: libpod-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Deactivated successfully.
Dec  3 02:08:04 compute-0 systemd[1]: libpod-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Consumed 1.194s CPU time.
Dec  3 02:08:04 compute-0 podman[432583]: 2025-12-03 02:08:04.797227424 +0000 UTC m=+1.504401452 container died 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e421848a9c2fd13abe47ce207c6470bb52d5a9ce43daa01888e082cbcfc60c1-merged.mount: Deactivated successfully.
Dec  3 02:08:04 compute-0 podman[432583]: 2025-12-03 02:08:04.894317311 +0000 UTC m=+1.601491309 container remove 7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 02:08:04 compute-0 podman[432627]: 2025-12-03 02:08:04.895626268 +0000 UTC m=+0.138748103 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec  3 02:08:04 compute-0 systemd[1]: libpod-conmon-7af804fd7ef375efe571e5d9e090e9a7a3a8b472c2b4de97ae56426b5db747a2.scope: Deactivated successfully.
Dec  3 02:08:05 compute-0 nova_compute[351485]: 2025-12-03 02:08:05.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.023140223 +0000 UTC m=+0.074537573 container create 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:05.99394296 +0000 UTC m=+0.045340400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:06 compute-0 systemd[1]: Started libpod-conmon-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope.
Dec  3 02:08:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.158576222 +0000 UTC m=+0.209973612 container init 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.170631172 +0000 UTC m=+0.222028532 container start 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.176030524 +0000 UTC m=+0.227427914 container attach 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:06 compute-0 fervent_benz[432834]: 167 167
Dec  3 02:08:06 compute-0 systemd[1]: libpod-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope: Deactivated successfully.
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.181053366 +0000 UTC m=+0.232450716 container died 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 02:08:06 compute-0 podman[432815]: 2025-12-03 02:08:06.204822536 +0000 UTC m=+0.101174274 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:08:06 compute-0 podman[432816]: 2025-12-03 02:08:06.20672866 +0000 UTC m=+0.103652934 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:08:06 compute-0 podman[432814]: 2025-12-03 02:08:06.207145882 +0000 UTC m=+0.116694172 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec  3 02:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eefd0cee5a8071eb80287af9f8704412dfeacf567f8550a827aa2e49b84b7547-merged.mount: Deactivated successfully.
Dec  3 02:08:06 compute-0 podman[432797]: 2025-12-03 02:08:06.232642401 +0000 UTC m=+0.284039751 container remove 2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_benz, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:06 compute-0 systemd[1]: libpod-conmon-2045fc4eafae41e231fb0f2b9827f7aab31dc1a57d64592e0a48c736ebc302fb.scope: Deactivated successfully.
Dec  3 02:08:06 compute-0 podman[432811]: 2025-12-03 02:08:06.271582729 +0000 UTC m=+0.172772173 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.463192422 +0000 UTC m=+0.080132621 container create 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.42942789 +0000 UTC m=+0.046368149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:06 compute-0 systemd[1]: Started libpod-conmon-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope.
Dec  3 02:08:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.677776363 +0000 UTC m=+0.294716622 container init 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.697135959 +0000 UTC m=+0.314076168 container start 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 02:08:06 compute-0 podman[432921]: 2025-12-03 02:08:06.703743985 +0000 UTC m=+0.320684624 container attach 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:08:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]: {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    "0": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "devices": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "/dev/loop3"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            ],
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_name": "ceph_lv0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_size": "21470642176",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "name": "ceph_lv0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "tags": {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_name": "ceph",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.crush_device_class": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.encrypted": "0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_id": "0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.vdo": "0"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            },
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "vg_name": "ceph_vg0"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        }
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    ],
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    "1": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "devices": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "/dev/loop4"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            ],
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_name": "ceph_lv1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_size": "21470642176",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "name": "ceph_lv1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "tags": {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_name": "ceph",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.crush_device_class": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.encrypted": "0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_id": "1",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.vdo": "0"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            },
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "vg_name": "ceph_vg1"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        }
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    ],
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    "2": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "devices": [
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "/dev/loop5"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            ],
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_name": "ceph_lv2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_size": "21470642176",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "name": "ceph_lv2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "tags": {
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.cluster_name": "ceph",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.crush_device_class": "",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.encrypted": "0",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osd_id": "2",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:                "ceph.vdo": "0"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            },
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "type": "block",
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:            "vg_name": "ceph_vg2"
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:        }
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]:    ]
Dec  3 02:08:07 compute-0 flamboyant_sanderson[432937]: }
Dec  3 02:08:07 compute-0 systemd[1]: libpod-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope: Deactivated successfully.
Dec  3 02:08:07 compute-0 podman[432921]: 2025-12-03 02:08:07.575794665 +0000 UTC m=+1.192734874 container died 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:08:07 compute-0 nova_compute[351485]: 2025-12-03 02:08:07.593 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-281e4d0aabfa5b0960ce054dba7ea9ee17f81b9b7146776a369a1500b4cc7b20-merged.mount: Deactivated successfully.
Dec  3 02:08:07 compute-0 podman[432921]: 2025-12-03 02:08:07.683209814 +0000 UTC m=+1.300149993 container remove 185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec  3 02:08:07 compute-0 systemd[1]: libpod-conmon-185717487c6e451025fa758aebb57bdc67cbf490def02e085dd433cc157f1c46.scope: Deactivated successfully.
Dec  3 02:08:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:08 compute-0 nova_compute[351485]: 2025-12-03 02:08:08.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.809305019 +0000 UTC m=+0.091431010 container create 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.779485288 +0000 UTC m=+0.061611349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:08 compute-0 systemd[1]: Started libpod-conmon-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope.
Dec  3 02:08:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.965308238 +0000 UTC m=+0.247434299 container init 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.983742988 +0000 UTC m=+0.265869009 container start 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.991177557 +0000 UTC m=+0.273303578 container attach 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:08:08 compute-0 hardcore_sutherland[433115]: 167 167
Dec  3 02:08:08 compute-0 systemd[1]: libpod-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope: Deactivated successfully.
Dec  3 02:08:08 compute-0 podman[433099]: 2025-12-03 02:08:08.994001557 +0000 UTC m=+0.276127578 container died 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6390ee4cc7bcc629f234d94e3d564c85db8dbe2cf230ecc9016e99e3de82477-merged.mount: Deactivated successfully.
Dec  3 02:08:09 compute-0 podman[433099]: 2025-12-03 02:08:09.075125574 +0000 UTC m=+0.357251595 container remove 4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:08:09 compute-0 systemd[1]: libpod-conmon-4c2e34a55849620d8c55ce8932c9ce7aabb09cdb95e19bd07e5b3334580f199d.scope: Deactivated successfully.
Dec  3 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.375396112 +0000 UTC m=+0.104677723 container create 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.338646475 +0000 UTC m=+0.067928146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:08:09 compute-0 systemd[1]: Started libpod-conmon-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope.
Dec  3 02:08:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.552481505 +0000 UTC m=+0.281763176 container init 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.576278966 +0000 UTC m=+0.305560577 container start 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:08:09 compute-0 nova_compute[351485]: 2025-12-03 02:08:09.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:09 compute-0 podman[433144]: 2025-12-03 02:08:09.585378353 +0000 UTC m=+0.314659974 container attach 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.604 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.608 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.609 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.699 351492 DEBUG nova.compute.manager [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.700 351492 DEBUG nova.compute.manager [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing instance network info cache due to event network-changed-d0c565d0-5299-45e5-84ac-ea722711af3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.701 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.701 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.702 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Refreshing network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:08:10 compute-0 angry_hamilton[433160]: {
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_id": 2,
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "type": "bluestore"
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    },
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_id": 1,
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "type": "bluestore"
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    },
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_id": 0,
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:        "type": "bluestore"
Dec  3 02:08:10 compute-0 angry_hamilton[433160]:    }
Dec  3 02:08:10 compute-0 angry_hamilton[433160]: }
Dec  3 02:08:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:10 compute-0 systemd[1]: libpod-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Deactivated successfully.
Dec  3 02:08:10 compute-0 podman[433144]: 2025-12-03 02:08:10.745428875 +0000 UTC m=+1.474710466 container died 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:08:10 compute-0 systemd[1]: libpod-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Consumed 1.158s CPU time.
Dec  3 02:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3294c236045b24f5b497f70773ac2a3b62b49499d9c13212b9641f75801195e1-merged.mount: Deactivated successfully.
Dec  3 02:08:10 compute-0 podman[433144]: 2025-12-03 02:08:10.809646956 +0000 UTC m=+1.538928537 container remove 8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:08:10 compute-0 systemd[1]: libpod-conmon-8bb616fd3295b6272c2c7fcab50f53b4c81e2a3710538b849dedebf66d40ff74.scope: Deactivated successfully.
Dec  3 02:08:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:08:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:08:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:10 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b0e70a03-a465-40c2-89fd-6bf1153c1f83 does not exist
Dec  3 02:08:10 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2c88b18b-af7c-4355-93a1-952be2c5b09b does not exist
Dec  3 02:08:10 compute-0 nova_compute[351485]: 2025-12-03 02:08:10.990 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:08:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546365977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.136 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.251 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.251 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.265 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.610 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.612 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.611 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.683 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.684 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.685 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.685 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.686 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.687 351492 INFO nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Terminating instance#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.689 351492 DEBUG nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:08:11 compute-0 kernel: tapd0c565d0-52 (unregistering): left promiscuous mode
Dec  3 02:08:11 compute-0 NetworkManager[48912]: <info>  [1764727691.8375] device (tapd0c565d0-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.853 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00054|binding|INFO|Releasing lport d0c565d0-5299-45e5-84ac-ea722711af3d from this chassis (sb_readonly=0)
Dec  3 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00055|binding|INFO|Setting lport d0c565d0-5299-45e5-84ac-ea722711af3d down in Southbound
Dec  3 02:08:11 compute-0 ovn_controller[89134]: 2025-12-03T02:08:11Z|00056|binding|INFO|Removing iface tapd0c565d0-52 ovn-installed in OVS
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.863 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.870 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:1b:b0 192.168.0.227'], port_security=['fa:16:3e:de:1b:b0 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-kaobzdetwujj-uf5345mx272a-port-25woqro3y5s6', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d0c565d0-5299-45e5-84ac-ea722711af3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.872 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d0c565d0-5299-45e5-84ac-ea722711af3d in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.873 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.887 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.902 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8381feba-1b9b-45bb-971b-5a0f0359cf82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:11 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  3 02:08:11 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 49.316s CPU time.
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.940 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:08:11 compute-0 systemd-machined[138558]: Machine qemu-3-instance-00000003 terminated.
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.942 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3414MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.943 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:11 compute-0 nova_compute[351485]: 2025-12-03 02:08:11.943 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.946 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[77871d90-6490-43ee-9677-ce257ebb429b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.949 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d57232-28e9-44a0-a48e-6ed0ca551fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:11.988 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[21cd0ced-a546-461f-9e9c-c9eb542e2a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.016 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[44890c6b-41f8-470c-bf19-84ce0e195a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 15808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 433288, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.047 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[526a61f2-eb70-4bfb-a09c-5377fc62b26c]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433289, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433289, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.050 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.053 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.056 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.057 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.062 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.063 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.064 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.065 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:08:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:12.066 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.131 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.142 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.156 351492 INFO nova.virt.libvirt.driver [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Instance destroyed successfully.#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.156 351492 DEBUG nova.objects.instance [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.173 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.205 351492 DEBUG nova.virt.libvirt.vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-kaobzdetwujj-uf5345mx272a-vnf-xg4pxtj76f4j',id=3,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:00:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-7757xffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:00:26Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwNjY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  3 02:08:12 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwN
jY4MzMxMzI4OTAwMzkzNz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDY2ODMzMTMyODkwMDM5Mzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA2NjgzMzEzMjg5MDAzOTM3PT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.206 351492 DEBUG nova.network.os_vif_util [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.207 351492 DEBUG nova.network.os_vif_util [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.208 351492 DEBUG os_vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.210 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.211 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0c565d0-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.228 351492 INFO os_vif [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:1b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d0c565d0-5299-45e5-84ac-ea722711af3d,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd0c565d0-52')#033[00m
Dec  3 02:08:12 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:08:12.205 351492 DEBUG nova.virt.libvirt.vif [None req-ff165601-34 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.618 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updated VIF entry in instance network info cache for port d0c565d0-5299-45e5-84ac-ea722711af3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.618 351492 DEBUG nova.network.neutron [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [{"id": "d0c565d0-5299-45e5-84ac-ea722711af3d", "address": "fa:16:3e:de:1b:b0", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0c565d0-52", "ovs_interfaceid": "d0c565d0-5299-45e5-84ac-ea722711af3d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.644 351492 DEBUG oslo_concurrency.lockutils [req-4dce454e-5a29-46aa-9451-ed4c8cd34dc4 req-b8700b45-3502-4344-b507-971fdce50b38 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:08:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:08:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345447596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:08:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.717 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.732 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.757 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.807 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.807 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.827 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.828 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.829 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-unplugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.830 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG oslo_concurrency.lockutils [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 DEBUG nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] No waiting events found dispatching network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:08:12 compute-0 nova_compute[351485]: 2025-12-03 02:08:12.831 351492 WARNING nova.compute.manager [req-36361c4e-a4ad-4acb-ba21-ccc1ba6a1a20 req-bb94d516-7945-451f-a5f5-07ad39451f03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Received unexpected event network-vif-plugged-d0c565d0-5299-45e5-84ac-ea722711af3d for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:08:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.236 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.607 351492 INFO nova.virt.libvirt.driver [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deleting instance files /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_del#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.608 351492 INFO nova.virt.libvirt.driver [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deletion of /var/lib/nova/instances/55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274_del complete#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.673 351492 INFO nova.compute.manager [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 1.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.674 351492 DEBUG oslo.service.loopingcall [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.674 351492 DEBUG nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.675 351492 DEBUG nova.network.neutron [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.807 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.808 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.842 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.842 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.843 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:08:13 compute-0 nova_compute[351485]: 2025-12-03 02:08:13.872 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec  3 02:08:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 190 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 341 B/s wr, 12 op/s
Dec  3 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.782 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.784 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.784 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:08:14 compute-0 nova_compute[351485]: 2025-12-03 02:08:14.785 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:08:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:15.615 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.335 351492 DEBUG nova.network.neutron [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.363 351492 INFO nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Took 2.69 seconds to deallocate network for instance.#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.432 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.433 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.540 351492 DEBUG oslo_concurrency.processutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:08:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.817 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.837 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.838 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.839 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.839 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:16 compute-0 nova_compute[351485]: 2025-12-03 02:08:16.840 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:08:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596664746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.066 351492 DEBUG oslo_concurrency.processutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.079 351492 DEBUG nova.compute.provider_tree [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.094 351492 DEBUG nova.scheduler.client.report [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.121 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.172 351492 INFO nova.scheduler.client.report [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:17 compute-0 nova_compute[351485]: 2025-12-03 02:08:17.296 351492 DEBUG oslo_concurrency.lockutils [None req-ff165601-3427-461c-b2b6-662be00680dc 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:18 compute-0 nova_compute[351485]: 2025-12-03 02:08:18.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:08:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:08:22 compute-0 nova_compute[351485]: 2025-12-03 02:08:22.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:08:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:08:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:23 compute-0 nova_compute[351485]: 2025-12-03 02:08:23.242 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:08:25 compute-0 podman[433364]: 2025-12-03 02:08:25.849073201 +0000 UTC m=+0.107292237 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:08:25 compute-0 podman[433366]: 2025-12-03 02:08:25.873063567 +0000 UTC m=+0.108390487 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:08:25 compute-0 podman[433365]: 2025-12-03 02:08:25.884833319 +0000 UTC m=+0.124398929 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  3 02:08:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 27 op/s
Dec  3 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.146 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727692.1441972, 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.147 351492 INFO nova.compute.manager [-] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.183 351492 DEBUG nova.compute.manager [None req-5de1a6e4-6226-4461-baec-cb32b77869b3 - - - - - -] [instance: 55bfde08-3b4f-4d0c-bb8f-0ad5f84ad274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:08:27 compute-0 nova_compute[351485]: 2025-12-03 02:08:27.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:28 compute-0 nova_compute[351485]: 2025-12-03 02:08:28.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:08:28
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:08:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:08:29 compute-0 podman[158098]: time="2025-12-03T02:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:08:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:08:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec  3 02:08:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:30 compute-0 podman[433426]: 2025-12-03 02:08:30.848751103 +0000 UTC m=+0.095968907 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:08:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:08:32 compute-0 nova_compute[351485]: 2025-12-03 02:08:32.223 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:33 compute-0 nova_compute[351485]: 2025-12-03 02:08:33.249 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:35 compute-0 podman[433444]: 2025-12-03 02:08:35.890439281 +0000 UTC m=+0.131408717 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_id=edpm, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:08:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:36 compute-0 podman[433465]: 2025-12-03 02:08:36.869062005 +0000 UTC m=+0.097599033 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:08:36 compute-0 podman[433466]: 2025-12-03 02:08:36.909081094 +0000 UTC m=+0.136479880 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:36 compute-0 podman[433464]: 2025-12-03 02:08:36.914470166 +0000 UTC m=+0.148641393 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64)
Dec  3 02:08:36 compute-0 podman[433463]: 2025-12-03 02:08:36.953028953 +0000 UTC m=+0.192423187 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:08:37 compute-0 nova_compute[351485]: 2025-12-03 02:08:37.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:38 compute-0 nova_compute[351485]: 2025-12-03 02:08:38.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:08:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:42 compute-0 nova_compute[351485]: 2025-12-03 02:08:42.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:43 compute-0 nova_compute[351485]: 2025-12-03 02:08:43.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:45 compute-0 systemd-logind[800]: New session 61 of user zuul.
Dec  3 02:08:45 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec  3 02:08:46 compute-0 python3[433722]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 02:08:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:08:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:08:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:08:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411792179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:08:47 compute-0 nova_compute[351485]: 2025-12-03 02:08:47.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:48 compute-0 nova_compute[351485]: 2025-12-03 02:08:48.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:49 compute-0 ovn_controller[89134]: 2025-12-03T02:08:49Z|00057|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  3 02:08:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:52 compute-0 nova_compute[351485]: 2025-12-03 02:08:52.237 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:08:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:53 compute-0 nova_compute[351485]: 2025-12-03 02:08:53.261 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec  3 02:08:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  3 02:08:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  3 02:08:55 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  3 02:08:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec  3 02:08:56 compute-0 podman[433762]: 2025-12-03 02:08:56.877909349 +0000 UTC m=+0.120134068 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 02:08:56 compute-0 podman[433763]: 2025-12-03 02:08:56.883428245 +0000 UTC m=+0.119769158 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:08:56 compute-0 podman[433764]: 2025-12-03 02:08:56.902964756 +0000 UTC m=+0.134258557 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:08:57 compute-0 nova_compute[351485]: 2025-12-03 02:08:57.240 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.156936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738157011, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 258, "total_data_size": 1833767, "memory_usage": 1867984, "flush_reason": "Manual Compaction"}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738170396, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1806063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32189, "largest_seqno": 33390, "table_properties": {"data_size": 1800211, "index_size": 3183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12037, "raw_average_key_size": 19, "raw_value_size": 1788496, "raw_average_value_size": 2884, "num_data_blocks": 143, "num_entries": 620, "num_filter_entries": 620, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727617, "oldest_key_time": 1764727617, "file_creation_time": 1764727738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 13565 microseconds, and 6555 cpu microseconds.
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.170496) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1806063 bytes OK
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.170794) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173671) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173697) EVENT_LOG_v1 {"time_micros": 1764727738173689, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.173721) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1828275, prev total WAL file size 1828275, number of live WAL files 2.
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.175412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1763KB)], [71(8870KB)]
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738175481, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10889436, "oldest_snapshot_seqno": -1}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5445 keys, 10785328 bytes, temperature: kUnknown
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738244286, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10785328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10745621, "index_size": 25005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137017, "raw_average_key_size": 25, "raw_value_size": 10643834, "raw_average_value_size": 1954, "num_data_blocks": 1036, "num_entries": 5445, "num_filter_entries": 5445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.244522) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10785328 bytes
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.246871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.1 rd, 156.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 5977, records dropped: 532 output_compression: NoCompression
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.246910) EVENT_LOG_v1 {"time_micros": 1764727738246895, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68871, "compaction_time_cpu_micros": 31532, "output_level": 6, "num_output_files": 1, "total_output_size": 10785328, "num_input_records": 5977, "num_output_records": 5445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738247325, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727738249222, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.175151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:08:58.249346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:08:58 compute-0 nova_compute[351485]: 2025-12-03 02:08:58.264 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:08:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 15 op/s
Dec  3 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.639 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:08:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:08:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:08:59 compute-0 podman[158098]: time="2025-12-03T02:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:08:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:08:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec  3 02:09:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:09:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:09:01 compute-0 podman[433822]: 2025-12-03 02:09:01.880297461 +0000 UTC m=+0.130023678 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.774 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.775 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.804 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.918 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.920 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.934 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:09:02 compute-0 nova_compute[351485]: 2025-12-03 02:09:02.935 351492 INFO nova.compute.claims [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.112 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.267 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:09:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071522504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.657 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.666 351492 DEBUG nova.compute.provider_tree [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.688 351492 DEBUG nova.scheduler.client.report [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.705 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.706 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.766 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.786 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.832 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.950 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.953 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:09:03 compute-0 nova_compute[351485]: 2025-12-03 02:09:03.954 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating image(s)#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.019 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.097 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.163 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.176 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "c29aeb8fc873eee85b0369901388993e8201c8d4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.179 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "c29aeb8fc873eee85b0369901388993e8201c8d4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:04 compute-0 nova_compute[351485]: 2025-12-03 02:09:04.427 351492 DEBUG nova.virt.libvirt.imagebackend [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/774b7995-1f03-43de-ad4e-feac9d5f9136/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/774b7995-1f03-43de-ad4e-feac9d5f9136/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 02:09:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.469 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.566 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.568 351492 DEBUG nova.virt.images [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] 774b7995-1f03-43de-ad4e-feac9d5f9136 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.570 351492 DEBUG nova.privsep.utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.571 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.803 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.part /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.812 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.912 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4.converted --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.914 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "c29aeb8fc873eee85b0369901388993e8201c8d4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.949 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:05 compute-0 nova_compute[351485]: 2025-12-03 02:09:05.959 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 3d670990-5a2a-4334-b8b1-9ae49d171323_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.417 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 3d670990-5a2a-4334-b8b1-9ae49d171323_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.589 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] resizing rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:09:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 1.4 MiB/s wr, 22 op/s
Dec  3 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.874 351492 DEBUG nova.objects.instance [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:09:06 compute-0 podman[434020]: 2025-12-03 02:09:06.91315856 +0000 UTC m=+0.156435883 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:09:06 compute-0 nova_compute[351485]: 2025-12-03 02:09:06.956 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.027 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:07 compute-0 podman[434057]: 2025-12-03 02:09:07.037718082 +0000 UTC m=+0.107050370 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.041 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.099 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.100 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.101 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:07 compute-0 podman[434081]: 2025-12-03 02:09:07.1029061 +0000 UTC m=+0.126792536 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.103 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:07 compute-0 podman[434078]: 2025-12-03 02:09:07.108943211 +0000 UTC m=+0.134171425 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.144 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.154 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:07 compute-0 podman[434137]: 2025-12-03 02:09:07.205922565 +0000 UTC m=+0.140682038 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.246 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.609 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.862 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.864 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Ensure instance console log exists: /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.866 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.867 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.868 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.872 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T02:08:51Z,direct_url=<?>,disk_format='qcow2',id=774b7995-1f03-43de-ad4e-feac9d5f9136,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T02:08:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '774b7995-1f03-43de-ad4e-feac9d5f9136'}], 'ephemerals': [{'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.880 351492 WARNING nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.888 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.889 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.895 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.896 351492 DEBUG nova.virt.libvirt.host [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.897 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.898 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8fb4324d-1fde-4886-9d66-fedd66b56d0f',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-03T02:08:51Z,direct_url=<?>,disk_format='qcow2',id=774b7995-1f03-43de-ad4e-feac9d5f9136,min_disk=0,min_ram=0,name='fvt_testing_image',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-03T02:08:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.899 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.899 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.900 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.900 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.901 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.901 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.902 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.902 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.903 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.904 351492 DEBUG nova.virt.hardware [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:09:07 compute-0 nova_compute[351485]: 2025-12-03 02:09:07.909 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.169005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748169055, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 328, "num_deletes": 252, "total_data_size": 154994, "memory_usage": 161936, "flush_reason": "Manual Compaction"}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748173783, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 153366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33391, "largest_seqno": 33718, "table_properties": {"data_size": 151244, "index_size": 286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5780, "raw_average_key_size": 20, "raw_value_size": 147121, "raw_average_value_size": 514, "num_data_blocks": 13, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727739, "oldest_key_time": 1764727739, "file_creation_time": 1764727748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4872 microseconds, and 2077 cpu microseconds.
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.173880) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 153366 bytes OK
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.173899) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176162) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176181) EVENT_LOG_v1 {"time_micros": 1764727748176174, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 152710, prev total WAL file size 152710, number of live WAL files 2.
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.177044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(149KB)], [74(10MB)]
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748177068, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10938694, "oldest_snapshot_seqno": -1}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5220 keys, 7624703 bytes, temperature: kUnknown
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748257515, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7624703, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7591318, "index_size": 19259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 132532, "raw_average_key_size": 25, "raw_value_size": 7498265, "raw_average_value_size": 1436, "num_data_blocks": 795, "num_entries": 5220, "num_filter_entries": 5220, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.257850) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7624703 bytes
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.259903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.7 rd, 94.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(121.0) write-amplify(49.7) OK, records in: 5731, records dropped: 511 output_compression: NoCompression
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.259922) EVENT_LOG_v1 {"time_micros": 1764727748259913, "job": 42, "event": "compaction_finished", "compaction_time_micros": 80619, "compaction_time_cpu_micros": 19893, "output_level": 6, "num_output_files": 1, "total_output_size": 7624703, "num_input_records": 5731, "num_output_records": 5220, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748260071, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727748261680, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.176872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:09:08.262043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.270 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:09:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376091058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.384 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.385 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 644 KiB/s wr, 9 op/s
Dec  3 02:09:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:09:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253709311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.892 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.945 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:08 compute-0 nova_compute[351485]: 2025-12-03 02:09:08.956 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:09:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367925342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.491 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.494 351492 DEBUG nova.objects.instance [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.521 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <uuid>3d670990-5a2a-4334-b8b1-9ae49d171323</uuid>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <name>instance-00000005</name>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <memory>524288</memory>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:name>fvt_testing_server</nova:name>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:09:07</nova:creationTime>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:flavor name="fvt_testing_flavor">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:memory>512</nova:memory>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:ephemeral>1</nova:ephemeral>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:user uuid="03ba25e4009b43f7b0054fee32bf9136">admin</nova:user>
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <nova:project uuid="9746b242761a48048d185ce26d622b33">admin</nova:project>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="774b7995-1f03-43de-ad4e-feac9d5f9136"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <nova:ports/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <system>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="serial">3d670990-5a2a-4334-b8b1-9ae49d171323</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="uuid">3d670990-5a2a-4334-b8b1-9ae49d171323</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </system>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <os>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </os>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <features>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </features>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </source>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk.eph0">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </source>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <target dev="vdb" bus="virtio"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </source>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:09:09 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/console.log" append="off"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <video>
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </video>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:09:09 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:09:09 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:09:09 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:09:09 compute-0 nova_compute[351485]: </domain>
Dec  3 02:09:09 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.583 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.584 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.584 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.585 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Using config drive#033[00m
Dec  3 02:09:09 compute-0 nova_compute[351485]: 2025-12-03 02:09:09.637 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.540 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Creating config drive at /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.546 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1a35ikk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.623 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.624 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.694 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1a35ikk" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 178 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 24 op/s
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.759 351492 DEBUG nova.storage.rbd_utils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] rbd image 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:09:10 compute-0 nova_compute[351485]: 2025-12-03 02:09:10.772 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.069 351492 DEBUG oslo_concurrency.processutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config 3d670990-5a2a-4334-b8b1-9ae49d171323_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.071 351492 INFO nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deleting local config drive /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323/disk.config because it was imported into RBD.#033[00m
Dec  3 02:09:11 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 02:09:11 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 02:09:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:09:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143152588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:09:11 compute-0 virtqemud[154511]: End of file while reading data: Input/output error
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.209 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:11 compute-0 systemd-machined[138558]: New machine qemu-5-instance-00000005.
Dec  3 02:09:11 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.774 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.779 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.779 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.780 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.784 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.785 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:11 compute-0 nova_compute[351485]: 2025-12-03 02:09:11.785 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.252 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.375 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.376 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.92203140258789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.376 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.377 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4f6313d0-d7b2-48b2-b9e2-846781eece59 does not exist
Dec  3 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f258548e-1ce4-4f0d-adea-b9d7bc27a84d does not exist
Dec  3 02:09:12 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f4a62abe-be38-4a56-881a-b691a75f95e6 does not exist
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:09:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:09:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 3d670990-5a2a-4334-b8b1-9ae49d171323 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.458 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.459 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.530 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 45 op/s
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.760 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.760 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.764 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727752.763187, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.764 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.770 351492 INFO nova.virt.libvirt.driver [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance spawned successfully.#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.771 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.820 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.835 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.840 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.841 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.841 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.842 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.843 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.843 351492 DEBUG nova.virt.libvirt.driver [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.871 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.871 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764727752.7636049, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.872 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Started (Lifecycle Event)#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.908 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.916 351492 INFO nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.916 351492 DEBUG nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.919 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.948 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:09:12 compute-0 nova_compute[351485]: 2025-12-03 02:09:12.989 351492 INFO nova.compute.manager [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 10.12 seconds to build instance.#033[00m
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.011 351492 DEBUG oslo_concurrency.lockutils [None req-d31d5d8e-48d8-4027-99f2-8690eb163212 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:09:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522871387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.059 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.082 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.104 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.141 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.142 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:13 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:09:13 compute-0 nova_compute[351485]: 2025-12-03 02:09:13.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.499100723 +0000 UTC m=+0.079471912 container create 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.463821498 +0000 UTC m=+0.044192727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:13 compute-0 systemd[1]: Started libpod-conmon-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope.
Dec  3 02:09:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.667502361 +0000 UTC m=+0.247873540 container init 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.697904728 +0000 UTC m=+0.278275917 container start 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.704196565 +0000 UTC m=+0.284567754 container attach 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:09:13 compute-0 inspiring_cannon[434835]: 167 167
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.713798376 +0000 UTC m=+0.294169565 container died 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:09:13 compute-0 systemd[1]: libpod-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope: Deactivated successfully.
Dec  3 02:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f223cc956d0ceda28b4319e85016a20ef366d1c46a6a7f252b2358370c4055-merged.mount: Deactivated successfully.
Dec  3 02:09:13 compute-0 podman[434819]: 2025-12-03 02:09:13.791463986 +0000 UTC m=+0.371835175 container remove 2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:09:13 compute-0 systemd[1]: libpod-conmon-2a3be12832ab13e2e72cbc121d60bceffbc349ea53824016d91dddcf5dc5fed6.scope: Deactivated successfully.
Dec  3 02:09:14 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 02:09:14 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.143639947 +0000 UTC m=+0.119667445 container create 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.088729319 +0000 UTC m=+0.064756877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:14 compute-0 systemd[1]: Started libpod-conmon-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope.
Dec  3 02:09:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.325059422 +0000 UTC m=+0.301086920 container init 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.337227885 +0000 UTC m=+0.313255383 container start 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:09:14 compute-0 podman[434858]: 2025-12-03 02:09:14.342466733 +0000 UTC m=+0.318494311 container attach 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:09:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 50 op/s
Dec  3 02:09:15 compute-0 sad_elbakyan[434894]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:09:15 compute-0 sad_elbakyan[434894]: --> relative data size: 1.0
Dec  3 02:09:15 compute-0 sad_elbakyan[434894]: --> All data devices are unavailable
Dec  3 02:09:15 compute-0 systemd[1]: libpod-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Deactivated successfully.
Dec  3 02:09:15 compute-0 systemd[1]: libpod-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Consumed 1.283s CPU time.
Dec  3 02:09:15 compute-0 podman[434858]: 2025-12-03 02:09:15.724764121 +0000 UTC m=+1.700791649 container died 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7a8c65e21c6a4a56c22126f531e49e9c05b27240c719ad8f24c72f2916dbfd-merged.mount: Deactivated successfully.
Dec  3 02:09:15 compute-0 podman[434858]: 2025-12-03 02:09:15.813911865 +0000 UTC m=+1.789939353 container remove 1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:09:15 compute-0 systemd[1]: libpod-conmon-1a630f41b63e4f6e64c3d6af8286766d5b9bba62e616603f9d4dff0aac614ebc.scope: Deactivated successfully.
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.141 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.142 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.143 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.411 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.412 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:09:16 compute-0 nova_compute[351485]: 2025-12-03 02:09:16.412 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:09:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Dec  3 02:09:16 compute-0 podman[435075]: 2025-12-03 02:09:16.949227309 +0000 UTC m=+0.086717166 container create 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:16.917889866 +0000 UTC m=+0.055379803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:17 compute-0 systemd[1]: Started libpod-conmon-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope.
Dec  3 02:09:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.117669999 +0000 UTC m=+0.255159896 container init 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.137329324 +0000 UTC m=+0.274819181 container start 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.142327815 +0000 UTC m=+0.279817702 container attach 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 02:09:17 compute-0 quizzical_brattain[435091]: 167 167
Dec  3 02:09:17 compute-0 systemd[1]: libpod-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope: Deactivated successfully.
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.152021498 +0000 UTC m=+0.289511395 container died 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49600c6262a42418154c434d0319c405d050300631ca24df9f2010f33de6477-merged.mount: Deactivated successfully.
Dec  3 02:09:17 compute-0 podman[435075]: 2025-12-03 02:09:17.224956615 +0000 UTC m=+0.362446462 container remove 6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:17 compute-0 systemd[1]: libpod-conmon-6d9bc0ad830e35a9702dd45c2115d59ed1a9d4fbf5e5d87630021832aa0fc571.scope: Deactivated successfully.
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.255 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.592273643 +0000 UTC m=+0.131326585 container create 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.549956169 +0000 UTC m=+0.089009181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:17 compute-0 systemd[1]: Started libpod-conmon-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope.
Dec  3 02:09:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.770775966 +0000 UTC m=+0.309828968 container init 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.789606437 +0000 UTC m=+0.328659349 container start 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:09:17 compute-0 podman[435114]: 2025-12-03 02:09:17.794973448 +0000 UTC m=+0.334026450 container attach 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.794 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.813 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.813 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.814 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:17 compute-0 nova_compute[351485]: 2025-12-03 02:09:17.814 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:18 compute-0 nova_compute[351485]: 2025-12-03 02:09:18.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:18 compute-0 nova_compute[351485]: 2025-12-03 02:09:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:18 compute-0 condescending_banach[435131]: {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    "0": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "devices": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "/dev/loop3"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            ],
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_name": "ceph_lv0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_size": "21470642176",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "name": "ceph_lv0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "tags": {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_name": "ceph",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.crush_device_class": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.encrypted": "0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_id": "0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.vdo": "0"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            },
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "vg_name": "ceph_vg0"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        }
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    ],
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    "1": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "devices": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "/dev/loop4"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            ],
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_name": "ceph_lv1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_size": "21470642176",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "name": "ceph_lv1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "tags": {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_name": "ceph",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.crush_device_class": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.encrypted": "0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_id": "1",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.vdo": "0"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            },
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "vg_name": "ceph_vg1"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        }
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    ],
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    "2": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "devices": [
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "/dev/loop5"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            ],
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_name": "ceph_lv2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_size": "21470642176",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "name": "ceph_lv2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "tags": {
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.cluster_name": "ceph",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.crush_device_class": "",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.encrypted": "0",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osd_id": "2",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:                "ceph.vdo": "0"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            },
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "type": "block",
Dec  3 02:09:18 compute-0 condescending_banach[435131]:            "vg_name": "ceph_vg2"
Dec  3 02:09:18 compute-0 condescending_banach[435131]:        }
Dec  3 02:09:18 compute-0 condescending_banach[435131]:    ]
Dec  3 02:09:18 compute-0 condescending_banach[435131]: }
Dec  3 02:09:18 compute-0 systemd[1]: libpod-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope: Deactivated successfully.
Dec  3 02:09:18 compute-0 podman[435114]: 2025-12-03 02:09:18.647921351 +0000 UTC m=+1.186974263 container died 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-98b09adf70d5a6c497996accff05b0896c47deccf0921b33a56d6df4116006ff-merged.mount: Deactivated successfully.
Dec  3 02:09:18 compute-0 podman[435114]: 2025-12-03 02:09:18.729841591 +0000 UTC m=+1.268894493 container remove 60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:09:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 62 op/s
Dec  3 02:09:18 compute-0 systemd[1]: libpod-conmon-60e3812e2554da68b1f3d1080d3df92ae42b2701062609e6d5ebea4d102d861e.scope: Deactivated successfully.
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.508 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.523 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.527 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3d670990-5a2a-4334-b8b1-9ae49d171323 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.529 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.899 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Wed, 03 Dec 2025 02:09:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-eba20fba-b015-491f-8f3f-7b8091be8a4a x-openstack-request-id: req-eba20fba-b015-491f-8f3f-7b8091be8a4a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.900 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3d670990-5a2a-4334-b8b1-9ae49d171323", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "9746b242761a48048d185ce26d622b33", "user_id": "03ba25e4009b43f7b0054fee32bf9136", "metadata": {}, "hostId": "875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd", "image": {"id": "774b7995-1f03-43de-ad4e-feac9d5f9136", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/774b7995-1f03-43de-ad4e-feac9d5f9136"}]}, "flavor": {"id": "8fb4324d-1fde-4886-9d66-fedd66b56d0f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8fb4324d-1fde-4886-9d66-fedd66b56d0f"}]}, "created": "2025-12-03T02:09:01Z", "updated": "2025-12-03T02:09:12Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3d670990-5a2a-4334-b8b1-9ae49d171323"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:09:12.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.900 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3d670990-5a2a-4334-b8b1-9ae49d171323 used request id req-eba20fba-b015-491f-8f3f-7b8091be8a4a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.902 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3d670990-5a2a-4334-b8b1-9ae49d171323', 'name': 'fvt_testing_server', 'flavor': {'id': '8fb4324d-1fde-4886-9d66-fedd66b56d0f', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '774b7995-1f03-43de-ad4e-feac9d5f9136'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.906 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:09:19.907331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:19.958 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:19 compute-0 podman[435289]: 2025-12-03 02:09:19.997427144 +0000 UTC m=+0.068991437 container create b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.008 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.008 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 3d670990-5a2a-4334-b8b1-9ae49d171323: ceilometer.compute.pollsters.NoVolumeException
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.055 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:09:20.061322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:19.973848109 +0000 UTC m=+0.045412422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.071 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 systemd[1]: Started libpod-conmon-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope.
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.081 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.083 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:09:20.083368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.084 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:09:20.085862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.087 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:09:20.087611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:09:20.089331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:09:20.091086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.118 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.119 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.120 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.138 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.139 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.139 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.147858696 +0000 UTC m=+0.219423019 container init b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.158067694 +0000 UTC m=+0.229631977 container start b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:09:20 compute-0 podman[435289]: 2025-12-03 02:09:20.164836094 +0000 UTC m=+0.236400457 container attach b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.172 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 funny_turing[435304]: 167 167
Dec  3 02:09:20 compute-0 systemd[1]: libpod-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope: Deactivated successfully.
Dec  3 02:09:20 compute-0 conmon[435304]: conmon b8bcd24a9198705b0f1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope/container/memory.events
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.176 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.177 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.179 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.180 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:09:20.179503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:09:20.183126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 podman[435309]: 2025-12-03 02:09:20.217922531 +0000 UTC m=+0.028834164 container died b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.253 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.254 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.255 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5637fa5fb1b972241f4e134c019fef6ff38b8ba592c2d6590e599272f7b6195a-merged.mount: Deactivated successfully.
Dec  3 02:09:20 compute-0 podman[435309]: 2025-12-03 02:09:20.272653525 +0000 UTC m=+0.083565148 container remove b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:09:20 compute-0 systemd[1]: libpod-conmon-b8bcd24a9198705b0f1b0457ce1dd6654921a83b8bfe8fa0f7622e4cd0ef6b84.scope: Deactivated successfully.
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.318 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.319 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.319 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.370 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.370 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.371 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:09:20.374018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.376 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.377 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:09:20.376455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.378 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.378 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 1568219530 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.379 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.379 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.latency volume: 10891607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.380 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.382 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:09:20.382730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.383 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.384 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.384 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.385 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.386 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.386 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:09:20.389029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.390 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.390 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:09:20.391969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.392 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.393 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.393 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.394 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.394 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.395 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:09:20.397596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.398 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.399 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.400 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:09:20.400829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.402 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:09:20.404246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.405 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.406 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.407 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:09:20.407598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 43250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/cpu volume: 6840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.409 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 46700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.412 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:09:20.409090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:09:20.410968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:09:20.412400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 3d670990-5a2a-4334-b8b1-9ae49d171323/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:09:20.413973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.415 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.417 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.418 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.421 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:09:20.416998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:09:20.418570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:09:20.419957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:09:20.421279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.423 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:09:20.424 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.565967046 +0000 UTC m=+0.093306912 container create f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.534086477 +0000 UTC m=+0.061426373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:09:20 compute-0 systemd[1]: Started libpod-conmon-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope.
Dec  3 02:09:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:09:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Dec  3 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.772948232 +0000 UTC m=+0.300288148 container init f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.795277932 +0000 UTC m=+0.322617828 container start f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:09:20 compute-0 podman[435329]: 2025-12-03 02:09:20.802488395 +0000 UTC m=+0.329828341 container attach f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]: {
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_id": 2,
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "type": "bluestore"
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    },
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_id": 1,
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "type": "bluestore"
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    },
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_id": 0,
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:        "type": "bluestore"
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]:    }
Dec  3 02:09:21 compute-0 heuristic_lederberg[435345]: }
Dec  3 02:09:21 compute-0 podman[435329]: 2025-12-03 02:09:21.972660013 +0000 UTC m=+1.499999869 container died f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:09:21 compute-0 systemd[1]: libpod-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Deactivated successfully.
Dec  3 02:09:21 compute-0 systemd[1]: libpod-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Consumed 1.162s CPU time.
Dec  3 02:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c50c82d572c922f018d7956017eef9fe44a79712112cd3d29757494143f9f45-merged.mount: Deactivated successfully.
Dec  3 02:09:22 compute-0 podman[435329]: 2025-12-03 02:09:22.068884126 +0000 UTC m=+1.596223992 container remove f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 02:09:22 compute-0 systemd[1]: libpod-conmon-f2e62eef5e6d1dbc759bcfdad0f53bb0583b268eb7aee5bb0bc6f93696178e71.scope: Deactivated successfully.
Dec  3 02:09:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:09:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:09:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d55f476f-1861-49a3-b4f5-cfac63a65b33 does not exist
Dec  3 02:09:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 49f1e839-03f1-4229-8623-7edbed5adc39 does not exist
Dec  3 02:09:22 compute-0 nova_compute[351485]: 2025-12-03 02:09:22.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:09:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 465 KiB/s wr, 82 op/s
Dec  3 02:09:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:09:23 compute-0 nova_compute[351485]: 2025-12-03 02:09:23.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:09:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 59 op/s
Dec  3 02:09:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 53 op/s
Dec  3 02:09:27 compute-0 nova_compute[351485]: 2025-12-03 02:09:27.262 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:27 compute-0 podman[435440]: 2025-12-03 02:09:27.868758374 +0000 UTC m=+0.110306102 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 02:09:27 compute-0 podman[435441]: 2025-12-03 02:09:27.868324851 +0000 UTC m=+0.115883218 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:09:27 compute-0 podman[435439]: 2025-12-03 02:09:27.876926204 +0000 UTC m=+0.127969770 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:09:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:28 compute-0 nova_compute[351485]: 2025-12-03 02:09:28.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:09:28
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images']
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:09:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:09:29 compute-0 podman[158098]: time="2025-12-03T02:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:09:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:09:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 02:09:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 34 op/s
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:09:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.407 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.411 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.412 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.413 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.414 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.418 351492 INFO nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Terminating instance#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.420 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.421 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquired lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.422 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:09:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 1 op/s
Dec  3 02:09:32 compute-0 nova_compute[351485]: 2025-12-03 02:09:32.816 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:09:32 compute-0 podman[435501]: 2025-12-03 02:09:32.886067095 +0000 UTC m=+0.130752928 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:09:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.191 351492 DEBUG nova.network.neutron [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.209 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Releasing lock "refresh_cache-3d670990-5a2a-4334-b8b1-9ae49d171323" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.211 351492 DEBUG nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.292 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:09:33 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  3 02:09:33 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 22.170s CPU time.
Dec  3 02:09:33 compute-0 systemd-machined[138558]: Machine qemu-5-instance-00000005 terminated.
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.448 351492 INFO nova.virt.libvirt.driver [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance destroyed successfully.#033[00m
Dec  3 02:09:33 compute-0 nova_compute[351485]: 2025-12-03 02:09:33.449 351492 DEBUG nova.objects.instance [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 3d670990-5a2a-4334-b8b1-9ae49d171323 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:09:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 173 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.800 351492 INFO nova.virt.libvirt.driver [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deleting instance files /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323_del#033[00m
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.802 351492 INFO nova.virt.libvirt.driver [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deletion of /var/lib/nova/instances/3d670990-5a2a-4334-b8b1-9ae49d171323_del complete#033[00m
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.873 351492 INFO nova.compute.manager [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 1.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.874 351492 DEBUG oslo.service.loopingcall [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.875 351492 DEBUG nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:09:34 compute-0 nova_compute[351485]: 2025-12-03 02:09:34.876 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.794 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.807 351492 DEBUG nova.network.neutron [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.822 351492 INFO nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Took 0.95 seconds to deallocate network for instance.#033[00m
Dec  3 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.897 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:35 compute-0 nova_compute[351485]: 2025-12-03 02:09:35.898 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.035 351492 DEBUG oslo_concurrency.processutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:09:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:09:36 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619494202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.549 351492 DEBUG oslo_concurrency.processutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.566 351492 DEBUG nova.compute.provider_tree [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.607 351492 DEBUG nova.scheduler.client.report [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.640 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.669 351492 INFO nova.scheduler.client.report [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 3d670990-5a2a-4334-b8b1-9ae49d171323
Dec  3 02:09:36 compute-0 nova_compute[351485]: 2025-12-03 02:09:36.754 351492 DEBUG oslo_concurrency.lockutils [None req-1abb2e1a-97a3-4ebc-a58d-c6c5fd1d0ab0 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "3d670990-5a2a-4334-b8b1-9ae49d171323" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:09:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 32 op/s
Dec  3 02:09:37 compute-0 nova_compute[351485]: 2025-12-03 02:09:37.271 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:37 compute-0 podman[435569]: 2025-12-03 02:09:37.871180278 +0000 UTC m=+0.100729462 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:09:37 compute-0 podman[435570]: 2025-12-03 02:09:37.902069489 +0000 UTC m=+0.125812879 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, distribution-scope=public, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  3 02:09:37 compute-0 podman[435576]: 2025-12-03 02:09:37.906883294 +0000 UTC m=+0.128221186 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  3 02:09:37 compute-0 podman[435567]: 2025-12-03 02:09:37.906981407 +0000 UTC m=+0.156514324 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  3 02:09:37 compute-0 podman[435568]: 2025-12-03 02:09:37.913876012 +0000 UTC m=+0.152887873 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  3 02:09:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  3 02:09:38 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  3 02:09:38 compute-0 nova_compute[351485]: 2025-12-03 02:09:38.297 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011045070041349222 of space, bias 1.0, pg target 0.33135210124047665 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:09:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 157 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 39 op/s
Dec  3 02:09:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 147 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Dec  3 02:09:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:09:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 7458 writes, 29K keys, 7458 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7458 writes, 1633 syncs, 4.57 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 915 writes, 2916 keys, 915 commit groups, 1.0 writes per commit group, ingest: 2.84 MB, 0.00 MB/s#012Interval WAL: 915 writes, 383 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:09:42 compute-0 nova_compute[351485]: 2025-12-03 02:09:42.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec  3 02:09:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:43 compute-0 nova_compute[351485]: 2025-12-03 02:09:43.301 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 67 op/s
Dec  3 02:09:46 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec  3 02:09:46 compute-0 systemd[1]: session-61.scope: Consumed 1.369s CPU time.
Dec  3 02:09:46 compute-0 systemd-logind[800]: Session 61 logged out. Waiting for processes to exit.
Dec  3 02:09:46 compute-0 systemd-logind[800]: Removed session 61.
Dec  3 02:09:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec  3 02:09:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:09:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:09:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:09:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192582471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:09:47 compute-0 nova_compute[351485]: 2025-12-03 02:09:47.278 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  3 02:09:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  3 02:09:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  3 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.445 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727773.4440382, 3d670990-5a2a-4334-b8b1-9ae49d171323 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.446 351492 INFO nova.compute.manager [-] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] VM Stopped (Lifecycle Event)
Dec  3 02:09:48 compute-0 nova_compute[351485]: 2025-12-03 02:09:48.475 351492 DEBUG nova.compute.manager [None req-a06ff9b7-ff3e-4a68-b855-8c39c26a77d2 - - - - - -] [instance: 3d670990-5a2a-4334-b8b1-9ae49d171323] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 02:09:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:09:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 8945 writes, 34K keys, 8945 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8945 writes, 2107 syncs, 4.25 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1124 writes, 3389 keys, 1124 commit groups, 1.0 writes per commit group, ingest: 2.51 MB, 0.00 MB/s#012Interval WAL: 1124 writes, 495 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:09:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec  3 02:09:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 818 B/s wr, 18 op/s
Dec  3 02:09:52 compute-0 nova_compute[351485]: 2025-12-03 02:09:52.282 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 204 B/s wr, 1 op/s
Dec  3 02:09:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:53 compute-0 nova_compute[351485]: 2025-12-03 02:09:53.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  3 02:09:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  3 02:09:54 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  3 02:09:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 147 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 966 KiB/s wr, 12 op/s
Dec  3 02:09:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:09:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 7002 writes, 28K keys, 7002 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7002 writes, 1484 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 615 writes, 1901 keys, 615 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s#012Interval WAL: 615 writes, 283 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:09:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Dec  3 02:09:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 02:09:57 compute-0 nova_compute[351485]: 2025-12-03 02:09:57.286 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:09:58 compute-0 nova_compute[351485]: 2025-12-03 02:09:58.311 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:09:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 155 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec  3 02:09:58 compute-0 podman[435674]: 2025-12-03 02:09:58.884914896 +0000 UTC m=+0.118683657 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 02:09:58 compute-0 podman[435675]: 2025-12-03 02:09:58.890686559 +0000 UTC m=+0.116566758 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:09:58 compute-0 podman[435673]: 2025-12-03 02:09:58.90206365 +0000 UTC m=+0.141440410 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.638 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.639 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:09:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:09:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:09:59 compute-0 podman[158098]: time="2025-12-03T02:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:09:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:09:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
Dec  3 02:10:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:10:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:10:02 compute-0 nova_compute[351485]: 2025-12-03 02:10:02.291 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  3 02:10:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  3 02:10:02 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  3 02:10:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 962 KiB/s wr, 9 op/s
Dec  3 02:10:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:03 compute-0 nova_compute[351485]: 2025-12-03 02:10:03.313 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:03 compute-0 podman[435731]: 2025-12-03 02:10:03.881866422 +0000 UTC m=+0.130159452 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:10:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 821 KiB/s wr, 26 op/s
Dec  3 02:10:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Dec  3 02:10:07 compute-0 nova_compute[351485]: 2025-12-03 02:10:07.295 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  3 02:10:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  3 02:10:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  3 02:10:08 compute-0 nova_compute[351485]: 2025-12-03 02:10:08.317 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:08 compute-0 nova_compute[351485]: 2025-12-03 02:10:08.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec  3 02:10:08 compute-0 podman[435749]: 2025-12-03 02:10:08.873095416 +0000 UTC m=+0.113261065 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, name=ubi9-minimal, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Dec  3 02:10:08 compute-0 podman[435750]: 2025-12-03 02:10:08.889006875 +0000 UTC m=+0.118678918 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:10:08 compute-0 podman[435751]: 2025-12-03 02:10:08.895849418 +0000 UTC m=+0.125599433 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, io.openshift.expose-services=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=edpm, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, 
managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 02:10:08 compute-0 podman[435758]: 2025-12-03 02:10:08.908294899 +0000 UTC m=+0.117088473 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:10:08 compute-0 podman[435748]: 2025-12-03 02:10:08.908519725 +0000 UTC m=+0.153715246 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true)
Dec  3 02:10:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:10:11 compute-0 nova_compute[351485]: 2025-12-03 02:10:11.610 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:10:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:10:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984081273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.132 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.270 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.271 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.272 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.279 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.280 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.281 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.300 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:12 compute-0 systemd-logind[800]: New session 62 of user zuul.
Dec  3 02:10:12 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec  3 02:10:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.832 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.834 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.834 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.835 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.954 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.955 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.955 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.956 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:10:12 compute-0 nova_compute[351485]: 2025-12-03 02:10:12.982 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.012 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.013 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.036 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.057 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.120 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:10:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.321 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:10:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117898108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.634 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.652 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.688 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:10:13 compute-0 nova_compute[351485]: 2025-12-03 02:10:13.689 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:10:13 compute-0 python3[436072]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 02:10:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 818 B/s wr, 6 op/s
Dec  3 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.690 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.691 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.739 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.739 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:10:15 compute-0 nova_compute[351485]: 2025-12-03 02:10:15.740 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.321 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.322 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.322 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:10:16 compute-0 nova_compute[351485]: 2025-12-03 02:10:16.323 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:10:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:17 compute-0 nova_compute[351485]: 2025-12-03 02:10:17.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.324 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.900 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.926 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.926 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.928 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:18 compute-0 nova_compute[351485]: 2025-12-03 02:10:18.929 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:19 compute-0 nova_compute[351485]: 2025-12-03 02:10:19.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:22 compute-0 nova_compute[351485]: 2025-12-03 02:10:22.309 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:22 compute-0 python3[436287]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 02:10:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:23 compute-0 nova_compute[351485]: 2025-12-03 02:10:23.326 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ea1c1dc8-1d45-4e20-b1c5-42071807000f does not exist
Dec  3 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3cd007d3-7655-499c-8629-6363a4042eff does not exist
Dec  3 02:10:23 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 00b75e47-c8ea-4b4c-801c-6284b3d57c16 does not exist
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:10:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:10:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:10:24 compute-0 nova_compute[351485]: 2025-12-03 02:10:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:10:24 compute-0 nova_compute[351485]: 2025-12-03 02:10:24.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:10:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:24 compute-0 podman[436598]: 2025-12-03 02:10:24.932784014 +0000 UTC m=+0.081603102 container create ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:10:24 compute-0 podman[436598]: 2025-12-03 02:10:24.908203831 +0000 UTC m=+0.057022969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:25 compute-0 systemd[1]: Started libpod-conmon-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope.
Dec  3 02:10:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.102108679 +0000 UTC m=+0.250927857 container init ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.115326211 +0000 UTC m=+0.264145299 container start ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.120511997 +0000 UTC m=+0.269331085 container attach ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:10:25 compute-0 stupefied_maxwell[436613]: 167 167
Dec  3 02:10:25 compute-0 systemd[1]: libpod-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope: Deactivated successfully.
Dec  3 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.128474792 +0000 UTC m=+0.277293910 container died ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e62f0949fa927f2019099c22dfe28512a9d5ff7fc40b804609ff0177511bb14-merged.mount: Deactivated successfully.
Dec  3 02:10:25 compute-0 podman[436598]: 2025-12-03 02:10:25.216801883 +0000 UTC m=+0.365621001 container remove ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:25 compute-0 systemd[1]: libpod-conmon-ae3e7c0d0f5f8c4d21657bf4ed2185010d5cdbc906436633b2268a2bd7bde6f2.scope: Deactivated successfully.
Dec  3 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.515278929 +0000 UTC m=+0.097778878 container create 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.473691987 +0000 UTC m=+0.056191996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:25 compute-0 systemd[1]: Started libpod-conmon-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope.
Dec  3 02:10:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.734233234 +0000 UTC m=+0.316733233 container init 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.760648569 +0000 UTC m=+0.343148528 container start 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:10:25 compute-0 podman[436635]: 2025-12-03 02:10:25.767752569 +0000 UTC m=+0.350252518 container attach 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:10:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:27 compute-0 pedantic_goldstine[436651]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:10:27 compute-0 pedantic_goldstine[436651]: --> relative data size: 1.0
Dec  3 02:10:27 compute-0 pedantic_goldstine[436651]: --> All data devices are unavailable
Dec  3 02:10:27 compute-0 systemd[1]: libpod-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Deactivated successfully.
Dec  3 02:10:27 compute-0 podman[436635]: 2025-12-03 02:10:27.050703996 +0000 UTC m=+1.633203925 container died 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:10:27 compute-0 systemd[1]: libpod-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Consumed 1.198s CPU time.
Dec  3 02:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-413a5aebb48bc0603420a673a1250e28ca252f907c06ed6847bc67233348a013-merged.mount: Deactivated successfully.
Dec  3 02:10:27 compute-0 podman[436635]: 2025-12-03 02:10:27.150275634 +0000 UTC m=+1.732775583 container remove 3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:10:27 compute-0 systemd[1]: libpod-conmon-3857d20e29b1466bae5abcac94f41743f3db3cd077550566f537efafa34c2bfe.scope: Deactivated successfully.
Dec  3 02:10:27 compute-0 nova_compute[351485]: 2025-12-03 02:10:27.312 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:10:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:28 compute-0 nova_compute[351485]: 2025-12-03 02:10:28.329 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.435228088 +0000 UTC m=+0.099717843 container create 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:10:28
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.398883253 +0000 UTC m=+0.063373068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:28 compute-0 systemd[1]: Started libpod-conmon-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope.
Dec  3 02:10:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.595668302 +0000 UTC m=+0.260158067 container init 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.612572399 +0000 UTC m=+0.277062144 container start 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.616746226 +0000 UTC m=+0.281235971 container attach 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:10:28 compute-0 hungry_swartz[436842]: 167 167
Dec  3 02:10:28 compute-0 systemd[1]: libpod-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope: Deactivated successfully.
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.625467812 +0000 UTC m=+0.289957607 container died 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae1cf21c2dd727fbfcfda733bc9e3191a1351b6e7873bf3cf2619f07b750ca9-merged.mount: Deactivated successfully.
Dec  3 02:10:28 compute-0 podman[436826]: 2025-12-03 02:10:28.694649753 +0000 UTC m=+0.359139508 container remove 44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:10:28 compute-0 systemd[1]: libpod-conmon-44fef424fd784edcc8bf214367ad98596d2d2e5662e952f1962db1f6bbf52820.scope: Deactivated successfully.
Dec  3 02:10:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:28 compute-0 podman[436867]: 2025-12-03 02:10:28.99879614 +0000 UTC m=+0.098440337 container create 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:10:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:28.962883717 +0000 UTC m=+0.062527984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:29 compute-0 systemd[1]: Started libpod-conmon-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope.
Dec  3 02:10:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.147445901 +0000 UTC m=+0.247090118 container init 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.1736318 +0000 UTC m=+0.273275997 container start 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.179227528 +0000 UTC m=+0.278871725 container attach 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:10:29 compute-0 podman[436880]: 2025-12-03 02:10:29.181964595 +0000 UTC m=+0.108924883 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 02:10:29 compute-0 podman[436884]: 2025-12-03 02:10:29.203520903 +0000 UTC m=+0.111132115 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:10:29 compute-0 podman[436883]: 2025-12-03 02:10:29.215426108 +0000 UTC m=+0.125774587 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:10:29 compute-0 podman[158098]: time="2025-12-03T02:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:10:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45383 "" "Go-http-client/1.1"
Dec  3 02:10:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Dec  3 02:10:29 compute-0 practical_spence[436890]: {
Dec  3 02:10:29 compute-0 practical_spence[436890]:    "0": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:        {
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "devices": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "/dev/loop3"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            ],
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_name": "ceph_lv0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_size": "21470642176",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "name": "ceph_lv0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "tags": {
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_name": "ceph",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.crush_device_class": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.encrypted": "0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_id": "0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.vdo": "0"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            },
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "vg_name": "ceph_vg0"
Dec  3 02:10:29 compute-0 practical_spence[436890]:        }
Dec  3 02:10:29 compute-0 practical_spence[436890]:    ],
Dec  3 02:10:29 compute-0 practical_spence[436890]:    "1": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:        {
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "devices": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "/dev/loop4"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            ],
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_name": "ceph_lv1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_size": "21470642176",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "name": "ceph_lv1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "tags": {
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_name": "ceph",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.crush_device_class": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.encrypted": "0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_id": "1",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.vdo": "0"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            },
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "vg_name": "ceph_vg1"
Dec  3 02:10:29 compute-0 practical_spence[436890]:        }
Dec  3 02:10:29 compute-0 practical_spence[436890]:    ],
Dec  3 02:10:29 compute-0 practical_spence[436890]:    "2": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:        {
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "devices": [
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "/dev/loop5"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            ],
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_name": "ceph_lv2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_size": "21470642176",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "name": "ceph_lv2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "tags": {
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.cluster_name": "ceph",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.crush_device_class": "",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.encrypted": "0",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osd_id": "2",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:                "ceph.vdo": "0"
Dec  3 02:10:29 compute-0 practical_spence[436890]:            },
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "type": "block",
Dec  3 02:10:29 compute-0 practical_spence[436890]:            "vg_name": "ceph_vg2"
Dec  3 02:10:29 compute-0 practical_spence[436890]:        }
Dec  3 02:10:29 compute-0 practical_spence[436890]:    ]
Dec  3 02:10:29 compute-0 practical_spence[436890]: }
Dec  3 02:10:29 compute-0 systemd[1]: libpod-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope: Deactivated successfully.
Dec  3 02:10:29 compute-0 podman[436867]: 2025-12-03 02:10:29.938245311 +0000 UTC m=+1.037889518 container died 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b91985e3d485c7d3a0d3dcedc4da516ab1355474a38f7f5a99900da000dfc679-merged.mount: Deactivated successfully.
Dec  3 02:10:30 compute-0 podman[436867]: 2025-12-03 02:10:30.040971138 +0000 UTC m=+1.140615355 container remove 00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 02:10:30 compute-0 systemd[1]: libpod-conmon-00570f736b1f2fda6061c88c9174a4957b7ca10fa6a06dc305b876c90f7005d4.scope: Deactivated successfully.
Dec  3 02:10:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.275619002 +0000 UTC m=+0.082572049 container create d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:10:31 compute-0 systemd[1]: Started libpod-conmon-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope.
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.25070468 +0000 UTC m=+0.057657757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.412503792 +0000 UTC m=+0.219456919 container init d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:10:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.429157782 +0000 UTC m=+0.236110859 container start d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.43618263 +0000 UTC m=+0.243135697 container attach d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:10:31 compute-0 interesting_kalam[437114]: 167 167
Dec  3 02:10:31 compute-0 systemd[1]: libpod-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope: Deactivated successfully.
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.443812135 +0000 UTC m=+0.250765242 container died d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6ac96c27c5e45d060d58a2d2808f7528bd10a957e74ef5afa1ce6f041299007-merged.mount: Deactivated successfully.
Dec  3 02:10:31 compute-0 podman[437098]: 2025-12-03 02:10:31.508748176 +0000 UTC m=+0.315701223 container remove d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kalam, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:10:31 compute-0 systemd[1]: libpod-conmon-d5c4865d64ae19e6517a5b68f2034438bec391ca74ac94888de30303944c8d82.scope: Deactivated successfully.
Dec  3 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.760733642 +0000 UTC m=+0.075257993 container create b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:10:31 compute-0 systemd[1]: Started libpod-conmon-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope.
Dec  3 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.736275572 +0000 UTC m=+0.050799953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:10:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.895200304 +0000 UTC m=+0.209724635 container init b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.922715489 +0000 UTC m=+0.237239820 container start b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 02:10:31 compute-0 podman[437137]: 2025-12-03 02:10:31.930078537 +0000 UTC m=+0.244602878 container attach b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:10:32 compute-0 nova_compute[351485]: 2025-12-03 02:10:32.314 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:32 compute-0 python3[437332]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 02:10:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:33 compute-0 crazy_margulis[437175]: {
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_id": 2,
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "type": "bluestore"
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    },
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_id": 1,
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "type": "bluestore"
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    },
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_id": 0,
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:        "type": "bluestore"
Dec  3 02:10:33 compute-0 crazy_margulis[437175]:    }
Dec  3 02:10:33 compute-0 crazy_margulis[437175]: }
Dec  3 02:10:33 compute-0 systemd[1]: libpod-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Deactivated successfully.
Dec  3 02:10:33 compute-0 systemd[1]: libpod-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Consumed 1.174s CPU time.
Dec  3 02:10:33 compute-0 podman[437137]: 2025-12-03 02:10:33.103762623 +0000 UTC m=+1.418286974 container died b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2903f226979ec8344a12a2edf3543e5b9de4ed7c2498cae590f196092a25d6e7-merged.mount: Deactivated successfully.
Dec  3 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:33 compute-0 podman[437137]: 2025-12-03 02:10:33.201132009 +0000 UTC m=+1.515656330 container remove b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:10:33 compute-0 systemd[1]: libpod-conmon-b349af3503cc411408a9bb7c85ac55079cb008639ddca0ff842b9891ea53f737.scope: Deactivated successfully.
Dec  3 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:10:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:10:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 094a866d-8bac-4eed-a3af-8bee4e936114 does not exist
Dec  3 02:10:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3f17902-352e-4513-8d9d-aeca4dd6e5dd does not exist
Dec  3 02:10:33 compute-0 nova_compute[351485]: 2025-12-03 02:10:33.332 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:10:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:34 compute-0 podman[437463]: 2025-12-03 02:10:34.886233506 +0000 UTC m=+0.128209047 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.317081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836317164, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1052, "num_deletes": 252, "total_data_size": 1439364, "memory_usage": 1459064, "flush_reason": "Manual Compaction"}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836332937, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1413508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33719, "largest_seqno": 34770, "table_properties": {"data_size": 1408336, "index_size": 2632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11329, "raw_average_key_size": 19, "raw_value_size": 1397875, "raw_average_value_size": 2465, "num_data_blocks": 117, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727749, "oldest_key_time": 1764727749, "file_creation_time": 1764727836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 15933 microseconds, and 8455 cpu microseconds.
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.333013) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1413508 bytes OK
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.333035) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335600) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335620) EVENT_LOG_v1 {"time_micros": 1764727836335613, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.335639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1434400, prev total WAL file size 1434400, number of live WAL files 2.
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.336584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1380KB)], [77(7445KB)]
Dec  3 02:10:36 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836336624, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9038211, "oldest_snapshot_seqno": -1}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5268 keys, 7279009 bytes, temperature: kUnknown
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836393025, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7279009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7245617, "index_size": 19138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 134279, "raw_average_key_size": 25, "raw_value_size": 7152039, "raw_average_value_size": 1357, "num_data_blocks": 783, "num_entries": 5268, "num_filter_entries": 5268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764727836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.393336) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7279009 bytes
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.395832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.0 rd, 128.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.5) write-amplify(5.1) OK, records in: 5787, records dropped: 519 output_compression: NoCompression
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.395860) EVENT_LOG_v1 {"time_micros": 1764727836395847, "job": 44, "event": "compaction_finished", "compaction_time_micros": 56492, "compaction_time_cpu_micros": 34162, "output_level": 6, "num_output_files": 1, "total_output_size": 7279009, "num_input_records": 5787, "num_output_records": 5268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836396461, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764727836399328, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.336398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:10:36.399745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:10:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:37 compute-0 nova_compute[351485]: 2025-12-03 02:10:37.319 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:38 compute-0 nova_compute[351485]: 2025-12-03 02:10:38.336 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:10:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:39 compute-0 podman[437483]: 2025-12-03 02:10:39.878126407 +0000 UTC m=+0.114512020 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Dec  3 02:10:39 compute-0 podman[437485]: 2025-12-03 02:10:39.884495996 +0000 UTC m=+0.111748282 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=)
Dec  3 02:10:39 compute-0 podman[437484]: 2025-12-03 02:10:39.897901754 +0000 UTC m=+0.129921524 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:10:39 compute-0 podman[437486]: 2025-12-03 02:10:39.901133396 +0000 UTC m=+0.122430164 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:10:39 compute-0 podman[437482]: 2025-12-03 02:10:39.938913031 +0000 UTC m=+0.182934710 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:10:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:42 compute-0 nova_compute[351485]: 2025-12-03 02:10:42.322 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:43 compute-0 nova_compute[351485]: 2025-12-03 02:10:43.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:10:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:10:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:10:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777396058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:10:47 compute-0 nova_compute[351485]: 2025-12-03 02:10:47.324 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:48 compute-0 python3[437758]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 02:10:48 compute-0 nova_compute[351485]: 2025-12-03 02:10:48.343 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:52 compute-0 nova_compute[351485]: 2025-12-03 02:10:52.327 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:53 compute-0 nova_compute[351485]: 2025-12-03 02:10:53.348 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:57 compute-0 nova_compute[351485]: 2025-12-03 02:10:57.330 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:10:58 compute-0 nova_compute[351485]: 2025-12-03 02:10:58.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:10:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.640 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.642 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:10:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:10:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:10:59 compute-0 podman[158098]: time="2025-12-03T02:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:10:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:10:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
Dec  3 02:10:59 compute-0 podman[437796]: 2025-12-03 02:10:59.879603704 +0000 UTC m=+0.096853440 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  3 02:10:59 compute-0 podman[437797]: 2025-12-03 02:10:59.887380514 +0000 UTC m=+0.097138549 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:10:59 compute-0 podman[437798]: 2025-12-03 02:10:59.892795886 +0000 UTC m=+0.098623751 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:11:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:11:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:11:02 compute-0 nova_compute[351485]: 2025-12-03 02:11:02.335 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:03 compute-0 nova_compute[351485]: 2025-12-03 02:11:03.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:05 compute-0 podman[437855]: 2025-12-03 02:11:05.90923552 +0000 UTC m=+0.158289185 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:11:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:07 compute-0 nova_compute[351485]: 2025-12-03 02:11:07.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:08 compute-0 nova_compute[351485]: 2025-12-03 02:11:08.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:09 compute-0 nova_compute[351485]: 2025-12-03 02:11:09.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:10 compute-0 podman[437877]: 2025-12-03 02:11:10.872679868 +0000 UTC m=+0.105583309 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:11:10 compute-0 podman[437876]: 2025-12-03 02:11:10.885294183 +0000 UTC m=+0.126466167 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec  3 02:11:10 compute-0 podman[437875]: 2025-12-03 02:11:10.904199836 +0000 UTC m=+0.150772202 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 02:11:10 compute-0 podman[437883]: 2025-12-03 02:11:10.90433016 +0000 UTC m=+0.134981597 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:11:10 compute-0 podman[437878]: 2025-12-03 02:11:10.914400554 +0000 UTC m=+0.152533392 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 02:11:11 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 02:11:11 compute-0 nova_compute[351485]: 2025-12-03 02:11:11.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.627 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.628 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.630 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:11:12 compute-0 nova_compute[351485]: 2025-12-03 02:11:12.631 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:11:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:11:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960510896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.114 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:11:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.247 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.249 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.249 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.851 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.854 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.855 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.856 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.976 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:11:13 compute-0 nova_compute[351485]: 2025-12-03 02:11:13.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.072 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:11:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 02:11:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:11:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44039771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.546 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.557 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.569 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.572 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:11:14 compute-0 nova_compute[351485]: 2025-12-03 02:11:14.573 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:11:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.574 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.575 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.864 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.865 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:11:15 compute-0 nova_compute[351485]: 2025-12-03 02:11:15.865 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:11:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.726 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.743 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.744 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:11:17 compute-0 nova_compute[351485]: 2025-12-03 02:11:17.745 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:18 compute-0 nova_compute[351485]: 2025-12-03 02:11:18.363 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:18 compute-0 nova_compute[351485]: 2025-12-03 02:11:18.740 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.509 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.510 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.510 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.519 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'name': 'vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {'metering.server_group': '0f6ab671-23df-4a6d-9613-02f9fb5fb294'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:11:19.526283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.568 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 nova_compute[351485]: 2025-12-03 02:11:19.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.606 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/memory.usage volume: 48.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:11:19.607941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.613 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.620 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.622 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.622 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:11:19.621812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.625 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:11:19.624608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.628 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:11:19.627386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.632 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.632 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:11:19.631888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:11:19.634777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.670 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.671 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.672 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.708 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.709 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.709 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:11:19.712211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.810 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.811 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.811 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.919 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.919 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.920 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.922 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:11:19.922373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.923 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.bytes volume: 2214 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.924 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.925 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 1930310646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.926 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 271584338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.926 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.latency volume: 193440648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.927 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 1854350820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.927 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 322798135 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.928 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.latency volume: 163317736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:11:19.925343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.930 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:11:19.930302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.931 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.932 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.932 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.933 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.934 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.935 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.936 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:11:19.935444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.937 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.938 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:11:19.938382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.939 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.940 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.941 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:11:19.944023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.945 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.945 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.946 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 8159105015 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:11:19.949161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.950 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 27311239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.950 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.951 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 7224488215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.951 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 31628821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.952 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.953 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.954 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.956 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:11:19.953252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:11:19.956180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/cpu volume: 45360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/cpu volume: 48770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:11:19.957857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:11:19.959424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.960 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:11:19.960816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:11:19.962332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.963 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.963 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.965 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:11:19.965329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.compute.pollsters [-] b43e79bd-550f-42f8-9aa7-980b6bca3f70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.968 14 DEBUG ceilometer.compute.pollsters [-] 9182286b-5a08-4961-b4bb-c0e2f05746f7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:11:19.967129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:11:19.968489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:11:19.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:11:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:21 compute-0 nova_compute[351485]: 2025-12-03 02:11:21.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:22 compute-0 nova_compute[351485]: 2025-12-03 02:11:22.352 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec  3 02:11:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:23 compute-0 nova_compute[351485]: 2025-12-03 02:11:23.367 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Dec  3 02:11:26 compute-0 nova_compute[351485]: 2025-12-03 02:11:26.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:11:26 compute-0 nova_compute[351485]: 2025-12-03 02:11:26.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:11:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:11:27 compute-0 nova_compute[351485]: 2025-12-03 02:11:27.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:28 compute-0 nova_compute[351485]: 2025-12-03 02:11:28.370 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:11:28
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log']
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:11:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:11:29 compute-0 podman[158098]: time="2025-12-03T02:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:11:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:11:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 02:11:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:11:30 compute-0 podman[438024]: 2025-12-03 02:11:30.87120193 +0000 UTC m=+0.109397736 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 02:11:30 compute-0 podman[438026]: 2025-12-03 02:11:30.890970037 +0000 UTC m=+0.116218618 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:11:30 compute-0 podman[438025]: 2025-12-03 02:11:30.903040487 +0000 UTC m=+0.133146805 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:11:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:11:32 compute-0 nova_compute[351485]: 2025-12-03 02:11:32.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:11:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:33 compute-0 nova_compute[351485]: 2025-12-03 02:11:33.372 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9b943978-44b9-4ba0-8980-c4371b17c598 does not exist
Dec  3 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b631a657-cfbe-4aa8-ad09-f082b04e2eb2 does not exist
Dec  3 02:11:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 07e94cd6-1b31-49aa-8da6-c7d3a611bbf8 does not exist
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:11:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:11:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.025730828 +0000 UTC m=+0.074881933 container create 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:11:36 compute-0 systemd[1]: Started libpod-conmon-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope.
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:35.994395914 +0000 UTC m=+0.043547059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.140630778 +0000 UTC m=+0.189781903 container init 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.154510719 +0000 UTC m=+0.203661824 container start 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.159279194 +0000 UTC m=+0.208430369 container attach 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:11:36 compute-0 pensive_goldberg[438363]: 167 167
Dec  3 02:11:36 compute-0 systemd[1]: libpod-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope: Deactivated successfully.
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.165323974 +0000 UTC m=+0.214475099 container died 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-732b8a1de358f1d8e973b2aa70729e3504abd19604a4526395a898b18b75cfc4-merged.mount: Deactivated successfully.
Dec  3 02:11:36 compute-0 podman[438360]: 2025-12-03 02:11:36.200902798 +0000 UTC m=+0.106707071 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  3 02:11:36 compute-0 podman[438349]: 2025-12-03 02:11:36.227166158 +0000 UTC m=+0.276317253 container remove 3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:11:36 compute-0 systemd[1]: libpod-conmon-3553c3e173bc145591675cfc11b4cc897f8bc84a3419bc0df2a7b934534cc6a8.scope: Deactivated successfully.
Dec  3 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.484130614 +0000 UTC m=+0.082924069 container create 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.450871026 +0000 UTC m=+0.049664551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:36 compute-0 systemd[1]: Started libpod-conmon-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope.
Dec  3 02:11:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.660696423 +0000 UTC m=+0.259489918 container init 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.694714462 +0000 UTC m=+0.293507917 container start 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:11:36 compute-0 podman[438404]: 2025-12-03 02:11:36.701944546 +0000 UTC m=+0.300738001 container attach 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:11:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec  3 02:11:37 compute-0 nova_compute[351485]: 2025-12-03 02:11:37.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:38 compute-0 heuristic_blackwell[438420]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:11:38 compute-0 heuristic_blackwell[438420]: --> relative data size: 1.0
Dec  3 02:11:38 compute-0 heuristic_blackwell[438420]: --> All data devices are unavailable
Dec  3 02:11:38 compute-0 systemd[1]: libpod-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Deactivated successfully.
Dec  3 02:11:38 compute-0 systemd[1]: libpod-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Consumed 1.300s CPU time.
Dec  3 02:11:38 compute-0 podman[438404]: 2025-12-03 02:11:38.063313975 +0000 UTC m=+1.662107410 container died 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a45088c12be3f1270f2a8f9dbed7c68b4437fc48f877d219761b072f3fb3e52b-merged.mount: Deactivated successfully.
Dec  3 02:11:38 compute-0 podman[438404]: 2025-12-03 02:11:38.142037755 +0000 UTC m=+1.740831180 container remove 19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:11:38 compute-0 systemd[1]: libpod-conmon-19f5eaa9ec3e93fe71c09ee882d0010fc3baadabc715da97863d17a12742e7ac.scope: Deactivated successfully.
Dec  3 02:11:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:38 compute-0 nova_compute[351485]: 2025-12-03 02:11:38.376 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:11:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.351045417 +0000 UTC m=+0.095330800 container create 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.315807563 +0000 UTC m=+0.060092996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:39 compute-0 systemd[1]: Started libpod-conmon-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope.
Dec  3 02:11:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.538291857 +0000 UTC m=+0.282577300 container init 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.55542984 +0000 UTC m=+0.299715233 container start 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:11:39 compute-0 nice_torvalds[438615]: 167 167
Dec  3 02:11:39 compute-0 systemd[1]: libpod-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope: Deactivated successfully.
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.56640791 +0000 UTC m=+0.310693343 container attach 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:11:39 compute-0 conmon[438615]: conmon 4f90c5f705b1fa571820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope/container/memory.events
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.569937499 +0000 UTC m=+0.314222862 container died 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-77eff13956e429a97fe68c5fa2af15f3d48b3abffb0d54a99623d180b5ff93c1-merged.mount: Deactivated successfully.
Dec  3 02:11:39 compute-0 podman[438599]: 2025-12-03 02:11:39.638253396 +0000 UTC m=+0.382538779 container remove 4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:11:39 compute-0 systemd[1]: libpod-conmon-4f90c5f705b1fa5718203b779e98079b140ba6bdc87b1d7ce620006712763ffe.scope: Deactivated successfully.
Dec  3 02:11:39 compute-0 podman[438639]: 2025-12-03 02:11:39.932488783 +0000 UTC m=+0.081554421 container create da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:39.907051305 +0000 UTC m=+0.056116953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:40 compute-0 systemd[1]: Started libpod-conmon-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope.
Dec  3 02:11:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.108897787 +0000 UTC m=+0.257963425 container init da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  3 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.131003501 +0000 UTC m=+0.280069119 container start da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:11:40 compute-0 podman[438639]: 2025-12-03 02:11:40.137923046 +0000 UTC m=+0.286988664 container attach da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:11:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:41 compute-0 pedantic_brown[438654]: {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    "0": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "devices": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "/dev/loop3"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            ],
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_name": "ceph_lv0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_size": "21470642176",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "name": "ceph_lv0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "tags": {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_name": "ceph",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.crush_device_class": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.encrypted": "0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_id": "0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.vdo": "0"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            },
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "vg_name": "ceph_vg0"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        }
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    ],
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    "1": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "devices": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "/dev/loop4"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            ],
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_name": "ceph_lv1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_size": "21470642176",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "name": "ceph_lv1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "tags": {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_name": "ceph",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.crush_device_class": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.encrypted": "0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_id": "1",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.vdo": "0"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            },
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "vg_name": "ceph_vg1"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        }
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    ],
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    "2": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "devices": [
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "/dev/loop5"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            ],
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_name": "ceph_lv2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_size": "21470642176",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "name": "ceph_lv2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "tags": {
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.cluster_name": "ceph",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.crush_device_class": "",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.encrypted": "0",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osd_id": "2",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:                "ceph.vdo": "0"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            },
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "type": "block",
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:            "vg_name": "ceph_vg2"
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:        }
Dec  3 02:11:41 compute-0 pedantic_brown[438654]:    ]
Dec  3 02:11:41 compute-0 pedantic_brown[438654]: }
Dec  3 02:11:41 compute-0 systemd[1]: libpod-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope: Deactivated successfully.
Dec  3 02:11:41 compute-0 podman[438639]: 2025-12-03 02:11:41.062351303 +0000 UTC m=+1.211416941 container died da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeb7ab803fc70250d5f895a900853c977ae211e07ffbc4d20350de1f873c539e-merged.mount: Deactivated successfully.
Dec  3 02:11:41 compute-0 podman[438639]: 2025-12-03 02:11:41.183450238 +0000 UTC m=+1.332515846 container remove da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:11:41 compute-0 systemd[1]: libpod-conmon-da16110551b901359e340060da0bf3c7e1675b5fc8f7e1343b22b32ab47ecde6.scope: Deactivated successfully.
Dec  3 02:11:41 compute-0 podman[438670]: 2025-12-03 02:11:41.252008101 +0000 UTC m=+0.133291919 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec  3 02:11:41 compute-0 podman[438672]: 2025-12-03 02:11:41.270684918 +0000 UTC m=+0.122022132 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:11:41 compute-0 podman[438688]: 2025-12-03 02:11:41.285914528 +0000 UTC m=+0.131565491 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec  3 02:11:41 compute-0 podman[438686]: 2025-12-03 02:11:41.303439782 +0000 UTC m=+0.129275297 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 02:11:41 compute-0 podman[438664]: 2025-12-03 02:11:41.311854689 +0000 UTC m=+0.190932685 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.15628355 +0000 UTC m=+0.083028851 container create 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.124217547 +0000 UTC m=+0.050962858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:42 compute-0 systemd[1]: Started libpod-conmon-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope.
Dec  3 02:11:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.294206139 +0000 UTC m=+0.220951450 container init 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.314120511 +0000 UTC m=+0.240865822 container start 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.321789037 +0000 UTC m=+0.248534348 container attach 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:11:42 compute-0 boring_rosalind[438932]: 167 167
Dec  3 02:11:42 compute-0 systemd[1]: libpod-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope: Deactivated successfully.
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.326351186 +0000 UTC m=+0.253096467 container died 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:11:42 compute-0 nova_compute[351485]: 2025-12-03 02:11:42.364 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f70ab416a48a81060452df1ee9963892bc1d7650f341568f3011b3b11ca637-merged.mount: Deactivated successfully.
Dec  3 02:11:42 compute-0 podman[438918]: 2025-12-03 02:11:42.403657166 +0000 UTC m=+0.330402447 container remove 924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:11:42 compute-0 systemd[1]: libpod-conmon-924d9e9112db3bc8523a08d5df3cd3875335605c4a334d672656e2d3bbe51d79.scope: Deactivated successfully.
Dec  3 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.697927714 +0000 UTC m=+0.095698710 container create fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.658989666 +0000 UTC m=+0.056760732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:11:42 compute-0 systemd[1]: Started libpod-conmon-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope.
Dec  3 02:11:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:11:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.876280593 +0000 UTC m=+0.274051629 container init fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.90809769 +0000 UTC m=+0.305868686 container start fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:11:42 compute-0 podman[438955]: 2025-12-03 02:11:42.915003855 +0000 UTC m=+0.312774861 container attach fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 02:11:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:43 compute-0 nova_compute[351485]: 2025-12-03 02:11:43.379 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]: {
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_id": 2,
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "type": "bluestore"
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    },
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_id": 1,
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "type": "bluestore"
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    },
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_id": 0,
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:        "type": "bluestore"
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]:    }
Dec  3 02:11:44 compute-0 romantic_montalcini[438971]: }
Dec  3 02:11:44 compute-0 systemd[1]: libpod-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Deactivated successfully.
Dec  3 02:11:44 compute-0 systemd[1]: libpod-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Consumed 1.287s CPU time.
Dec  3 02:11:44 compute-0 podman[439004]: 2025-12-03 02:11:44.307892883 +0000 UTC m=+0.059167550 container died fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e42037c50d1c580d206729b66e724e7ccc00aef5cf594fecb307415d7041728a-merged.mount: Deactivated successfully.
Dec  3 02:11:44 compute-0 podman[439004]: 2025-12-03 02:11:44.425415347 +0000 UTC m=+0.176690014 container remove fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:11:44 compute-0 systemd[1]: libpod-conmon-fe0d719c741ed5d7cf362d79d1dd66c88ded2bd83319150b1f8debe0c1a4a974.scope: Deactivated successfully.
Dec  3 02:11:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:11:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:11:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3295f73-b368-4d42-9115-547e964cf3bb does not exist
Dec  3 02:11:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bb130fd2-025f-435e-aee3-743db6a496ce does not exist
Dec  3 02:11:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:11:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:11:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:11:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:11:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2703153391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:11:47 compute-0 nova_compute[351485]: 2025-12-03 02:11:47.369 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:48 compute-0 nova_compute[351485]: 2025-12-03 02:11:48.383 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:48 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec  3 02:11:48 compute-0 systemd[1]: session-62.scope: Consumed 5.493s CPU time.
Dec  3 02:11:48 compute-0 systemd-logind[800]: Session 62 logged out. Waiting for processes to exit.
Dec  3 02:11:48 compute-0 systemd-logind[800]: Removed session 62.
Dec  3 02:11:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:52 compute-0 nova_compute[351485]: 2025-12-03 02:11:52.375 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:53 compute-0 nova_compute[351485]: 2025-12-03 02:11:53.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:57 compute-0 nova_compute[351485]: 2025-12-03 02:11:57.379 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:11:58 compute-0 nova_compute[351485]: 2025-12-03 02:11:58.392 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:11:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.641 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.642 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:11:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:11:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:11:59 compute-0 podman[158098]: time="2025-12-03T02:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:11:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:11:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec  3 02:12:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:12:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:12:01 compute-0 podman[439075]: 2025-12-03 02:12:01.88263634 +0000 UTC m=+0.120921242 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:12:01 compute-0 podman[439073]: 2025-12-03 02:12:01.884576015 +0000 UTC m=+0.130918395 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:12:01 compute-0 podman[439074]: 2025-12-03 02:12:01.927216651 +0000 UTC m=+0.167490319 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 02:12:02 compute-0 nova_compute[351485]: 2025-12-03 02:12:02.384 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:03 compute-0 nova_compute[351485]: 2025-12-03 02:12:03.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:06 compute-0 podman[439131]: 2025-12-03 02:12:06.905444321 +0000 UTC m=+0.151758425 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:12:07 compute-0 nova_compute[351485]: 2025-12-03 02:12:07.388 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:08 compute-0 nova_compute[351485]: 2025-12-03 02:12:08.398 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:11 compute-0 nova_compute[351485]: 2025-12-03 02:12:11.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:11 compute-0 podman[439153]: 2025-12-03 02:12:11.869619862 +0000 UTC m=+0.097625263 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:12:11 compute-0 podman[439152]: 2025-12-03 02:12:11.883296169 +0000 UTC m=+0.117542796 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=)
Dec  3 02:12:11 compute-0 podman[439160]: 2025-12-03 02:12:11.88968433 +0000 UTC m=+0.103284683 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:12:11 compute-0 podman[439154]: 2025-12-03 02:12:11.922822658 +0000 UTC m=+0.140867127 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=base rhel9)
Dec  3 02:12:11 compute-0 podman[439151]: 2025-12-03 02:12:11.935139276 +0000 UTC m=+0.175399524 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:12:12 compute-0 nova_compute[351485]: 2025-12-03 02:12:12.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:12 compute-0 nova_compute[351485]: 2025-12-03 02:12:12.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.642 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:12:13 compute-0 nova_compute[351485]: 2025-12-03 02:12:13.643 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:12:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:12:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178662724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.174 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.276 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.277 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.277 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.282 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.283 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.283 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.792 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.795 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.796 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:12:14 compute-0 nova_compute[351485]: 2025-12-03 02:12:14.797 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:12:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.022 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.023 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance b43e79bd-550f-42f8-9aa7-980b6bca3f70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.024 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.025 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.214 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:12:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:12:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352976424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.729 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.745 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.762 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.766 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.767 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.768 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.769 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:12:15 compute-0 nova_compute[351485]: 2025-12-03 02:12:15.799 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:12:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.394 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.793 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.794 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.826 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.826 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:12:17 compute-0 nova_compute[351485]: 2025-12-03 02:12:17.827 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:12:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.405 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.879 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.879 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.880 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:12:18 compute-0 nova_compute[351485]: 2025-12-03 02:12:18.881 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:12:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.911 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [{"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.963 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-9182286b-5a08-4961-b4bb-c0e2f05746f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.964 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.966 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:21 compute-0 nova_compute[351485]: 2025-12-03 02:12:21.967 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:22 compute-0 nova_compute[351485]: 2025-12-03 02:12:22.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:23 compute-0 nova_compute[351485]: 2025-12-03 02:12:23.409 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:24 compute-0 nova_compute[351485]: 2025-12-03 02:12:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:24 compute-0 nova_compute[351485]: 2025-12-03 02:12:24.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:12:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:27 compute-0 nova_compute[351485]: 2025-12-03 02:12:27.400 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.412 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:12:28
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta']
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:28 compute-0 nova_compute[351485]: 2025-12-03 02:12:28.602 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:12:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:12:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:12:29 compute-0 podman[158098]: time="2025-12-03T02:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:12:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:12:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
Dec  3 02:12:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:12:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:12:32 compute-0 nova_compute[351485]: 2025-12-03 02:12:32.404 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:32 compute-0 podman[439299]: 2025-12-03 02:12:32.878098748 +0000 UTC m=+0.114808729 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 02:12:32 compute-0 podman[439300]: 2025-12-03 02:12:32.901786088 +0000 UTC m=+0.133962211 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:12:32 compute-0 podman[439298]: 2025-12-03 02:12:32.94781617 +0000 UTC m=+0.193039092 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:12:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:33 compute-0 nova_compute[351485]: 2025-12-03 02:12:33.346 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:33 compute-0 nova_compute[351485]: 2025-12-03 02:12:33.414 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:37 compute-0 nova_compute[351485]: 2025-12-03 02:12:37.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:37 compute-0 nova_compute[351485]: 2025-12-03 02:12:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:12:37 compute-0 podman[439356]: 2025-12-03 02:12:37.869061777 +0000 UTC m=+0.118666289 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 02:12:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:38 compute-0 nova_compute[351485]: 2025-12-03 02:12:38.417 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:12:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:42 compute-0 nova_compute[351485]: 2025-12-03 02:12:42.412 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:42 compute-0 podman[439380]: 2025-12-03 02:12:42.881057691 +0000 UTC m=+0.105962159 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
Dec  3 02:12:42 compute-0 podman[439377]: 2025-12-03 02:12:42.898113784 +0000 UTC m=+0.137930164 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 02:12:42 compute-0 podman[439378]: 2025-12-03 02:12:42.907976313 +0000 UTC m=+0.145031395 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:12:42 compute-0 podman[439376]: 2025-12-03 02:12:42.908042165 +0000 UTC m=+0.150212831 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:12:42 compute-0 podman[439379]: 2025-12-03 02:12:42.909263479 +0000 UTC m=+0.136297047 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec  3 02:12:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:43 compute-0 nova_compute[351485]: 2025-12-03 02:12:43.420 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:12:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:12:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/989159114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 405f590a-c17f-44eb-ad6b-d4408186e97c does not exist
Dec  3 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 266da04f-cb46-4d6b-9554-53294b60ffc5 does not exist
Dec  3 02:12:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3563e199-36a6-4368-8b3e-1aaea8c4397d does not exist
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:12:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:12:47 compute-0 nova_compute[351485]: 2025-12-03 02:12:47.416 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:48 compute-0 nova_compute[351485]: 2025-12-03 02:12:48.424 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.447026211 +0000 UTC m=+0.107267836 container create 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.404994712 +0000 UTC m=+0.065236387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:48 compute-0 systemd[1]: Started libpod-conmon-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope.
Dec  3 02:12:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.641247877 +0000 UTC m=+0.301489532 container init 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.65835126 +0000 UTC m=+0.318592885 container start 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.66505069 +0000 UTC m=+0.325292305 container attach 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:12:48 compute-0 jovial_meninsky[439886]: 167 167
Dec  3 02:12:48 compute-0 systemd[1]: libpod-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope: Deactivated successfully.
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.674203329 +0000 UTC m=+0.334444954 container died 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 02:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fedc665a06ddf56ee54055670fa062c4b5e5b50f25747657dd99109c6adcc2d-merged.mount: Deactivated successfully.
Dec  3 02:12:48 compute-0 podman[439869]: 2025-12-03 02:12:48.765276916 +0000 UTC m=+0.425518541 container remove 473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:12:48 compute-0 systemd[1]: libpod-conmon-473cb707630fe201dfb27c7483cd90d902b1502a6e94020845faab6828372ec5.scope: Deactivated successfully.
Dec  3 02:12:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.064469141 +0000 UTC m=+0.083025150 container create cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.035618325 +0000 UTC m=+0.054174334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:49 compute-0 systemd[1]: Started libpod-conmon-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope.
Dec  3 02:12:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.285277449 +0000 UTC m=+0.303833518 container init cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.311075509 +0000 UTC m=+0.329631518 container start cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:12:49 compute-0 podman[439908]: 2025-12-03 02:12:49.317001757 +0000 UTC m=+0.335557816 container attach cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:12:50 compute-0 peaceful_kare[439923]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:12:50 compute-0 peaceful_kare[439923]: --> relative data size: 1.0
Dec  3 02:12:50 compute-0 peaceful_kare[439923]: --> All data devices are unavailable
Dec  3 02:12:50 compute-0 systemd[1]: libpod-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Deactivated successfully.
Dec  3 02:12:50 compute-0 podman[439908]: 2025-12-03 02:12:50.639474632 +0000 UTC m=+1.658030641 container died cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:12:50 compute-0 systemd[1]: libpod-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Consumed 1.235s CPU time.
Dec  3 02:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-23081b9ce55d8c8c02e9c78842c376db11761d13c41978750f328695622bcb75-merged.mount: Deactivated successfully.
Dec  3 02:12:50 compute-0 podman[439908]: 2025-12-03 02:12:50.737314181 +0000 UTC m=+1.755870160 container remove cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:12:50 compute-0 systemd[1]: libpod-conmon-cf1e8b3bb5ec50ef806490d5092b7207d3216a4a0fb1e61111b5ab1afcc2d74e.scope: Deactivated successfully.
Dec  3 02:12:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:51 compute-0 podman[440102]: 2025-12-03 02:12:51.980116234 +0000 UTC m=+0.075282421 container create 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:51.952865563 +0000 UTC m=+0.048031730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:52 compute-0 systemd[1]: Started libpod-conmon-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope.
Dec  3 02:12:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.135165421 +0000 UTC m=+0.230331668 container init 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.151636007 +0000 UTC m=+0.246802194 container start 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.158390178 +0000 UTC m=+0.253556375 container attach 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:12:52 compute-0 festive_cerf[440117]: 167 167
Dec  3 02:12:52 compute-0 systemd[1]: libpod-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope: Deactivated successfully.
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.167952249 +0000 UTC m=+0.263118436 container died 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 02:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5078db38c57c1206de462c038b03721af1d7ed8c837f1e7feb4f380123ff8ea-merged.mount: Deactivated successfully.
Dec  3 02:12:52 compute-0 podman[440102]: 2025-12-03 02:12:52.251011258 +0000 UTC m=+0.346177445 container remove 6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_cerf, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 02:12:52 compute-0 systemd[1]: libpod-conmon-6d601cc658d9a37e028e64fbfd73f41772dfe9f83dfc012b2a14b7852e1f6ab1.scope: Deactivated successfully.
Dec  3 02:12:52 compute-0 nova_compute[351485]: 2025-12-03 02:12:52.420 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.499850129 +0000 UTC m=+0.081396174 container create d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.463376457 +0000 UTC m=+0.044922502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:52 compute-0 systemd[1]: Started libpod-conmon-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope.
Dec  3 02:12:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.700495926 +0000 UTC m=+0.282042011 container init d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.720028008 +0000 UTC m=+0.301574043 container start d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:12:52 compute-0 podman[440140]: 2025-12-03 02:12:52.726255064 +0000 UTC m=+0.307801109 container attach d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  3 02:12:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:53 compute-0 nova_compute[351485]: 2025-12-03 02:12:53.426 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]: {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    "0": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "devices": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "/dev/loop3"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            ],
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_name": "ceph_lv0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_size": "21470642176",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "name": "ceph_lv0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "tags": {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_name": "ceph",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.crush_device_class": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.encrypted": "0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_id": "0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.vdo": "0"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            },
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "vg_name": "ceph_vg0"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        }
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    ],
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    "1": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "devices": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "/dev/loop4"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            ],
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_name": "ceph_lv1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_size": "21470642176",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "name": "ceph_lv1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "tags": {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_name": "ceph",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.crush_device_class": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.encrypted": "0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_id": "1",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.vdo": "0"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            },
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "vg_name": "ceph_vg1"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        }
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    ],
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    "2": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "devices": [
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "/dev/loop5"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            ],
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_name": "ceph_lv2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_size": "21470642176",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "name": "ceph_lv2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "tags": {
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.cluster_name": "ceph",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.crush_device_class": "",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.encrypted": "0",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osd_id": "2",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:                "ceph.vdo": "0"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            },
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "type": "block",
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:            "vg_name": "ceph_vg2"
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:        }
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]:    ]
Dec  3 02:12:53 compute-0 dazzling_archimedes[440155]: }
Dec  3 02:12:53 compute-0 systemd[1]: libpod-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope: Deactivated successfully.
Dec  3 02:12:53 compute-0 podman[440140]: 2025-12-03 02:12:53.584190098 +0000 UTC m=+1.165736133 container died d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b56366379b6655a180064af1159cbbceb4f82fe9990d673b4f33a08f6a339b35-merged.mount: Deactivated successfully.
Dec  3 02:12:53 compute-0 podman[440140]: 2025-12-03 02:12:53.683724094 +0000 UTC m=+1.265270109 container remove d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:12:53 compute-0 systemd[1]: libpod-conmon-d6a7519010a42e0a26f2aaa1f1c869e2df718e6d6fb514c11315960f8e62341c.scope: Deactivated successfully.
Dec  3 02:12:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:54 compute-0 podman[440311]: 2025-12-03 02:12:54.9135972 +0000 UTC m=+0.089979347 container create 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:12:54 compute-0 podman[440311]: 2025-12-03 02:12:54.88037579 +0000 UTC m=+0.056757987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:54 compute-0 systemd[1]: Started libpod-conmon-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope.
Dec  3 02:12:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.057382248 +0000 UTC m=+0.233764425 container init 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.074248845 +0000 UTC m=+0.250631002 container start 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.080870223 +0000 UTC m=+0.257252430 container attach 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:12:55 compute-0 sharp_lalande[440327]: 167 167
Dec  3 02:12:55 compute-0 systemd[1]: libpod-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope: Deactivated successfully.
Dec  3 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.087267424 +0000 UTC m=+0.263649571 container died 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a523adb9da6a49bc5aca3bd642737554cea8bff897747f58cc18f51a54fb75-merged.mount: Deactivated successfully.
Dec  3 02:12:55 compute-0 podman[440311]: 2025-12-03 02:12:55.179612877 +0000 UTC m=+0.355995014 container remove 515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:12:55 compute-0 systemd[1]: libpod-conmon-515979227c57bb28438012c99e4f301f1341f9e94d1e887b8f8de25bdf32ae64.scope: Deactivated successfully.
Dec  3 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.464881998 +0000 UTC m=+0.098839958 container create c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.430176046 +0000 UTC m=+0.064134056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:12:55 compute-0 systemd[1]: Started libpod-conmon-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope.
Dec  3 02:12:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.630855334 +0000 UTC m=+0.264813294 container init c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.663431045 +0000 UTC m=+0.297388975 container start c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:12:55 compute-0 podman[440351]: 2025-12-03 02:12:55.670150095 +0000 UTC m=+0.304108055 container attach c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:12:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:56 compute-0 determined_booth[440367]: {
Dec  3 02:12:56 compute-0 determined_booth[440367]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_id": 2,
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "type": "bluestore"
Dec  3 02:12:56 compute-0 determined_booth[440367]:    },
Dec  3 02:12:56 compute-0 determined_booth[440367]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_id": 1,
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "type": "bluestore"
Dec  3 02:12:56 compute-0 determined_booth[440367]:    },
Dec  3 02:12:56 compute-0 determined_booth[440367]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_id": 0,
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:12:56 compute-0 determined_booth[440367]:        "type": "bluestore"
Dec  3 02:12:56 compute-0 determined_booth[440367]:    }
Dec  3 02:12:56 compute-0 determined_booth[440367]: }
Dec  3 02:12:56 compute-0 systemd[1]: libpod-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Deactivated successfully.
Dec  3 02:12:56 compute-0 systemd[1]: libpod-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Consumed 1.287s CPU time.
Dec  3 02:12:57 compute-0 podman[440400]: 2025-12-03 02:12:57.052414404 +0000 UTC m=+0.067825870 container died c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a322f5da64313ffb6c1b0db9fb19fcc3295d89169b98c1bcb4a41fdd1024df71-merged.mount: Deactivated successfully.
Dec  3 02:12:57 compute-0 podman[440400]: 2025-12-03 02:12:57.155005066 +0000 UTC m=+0.170416482 container remove c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 02:12:57 compute-0 systemd[1]: libpod-conmon-c7d3197e2bf0903555389e673208c59b7171ef6a3bb4e9dc3782291078a7b6b2.scope: Deactivated successfully.
Dec  3 02:12:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:12:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:12:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8a0aa3cf-3e19-418a-98a6-2b2d96aee487 does not exist
Dec  3 02:12:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a99c2746-8dc0-4764-81e9-af86ab7387d9 does not exist
Dec  3 02:12:57 compute-0 nova_compute[351485]: 2025-12-03 02:12:57.424 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:12:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:12:58 compute-0 nova_compute[351485]: 2025-12-03 02:12:58.430 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:12:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.643 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.644 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.645 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.657 351492 DEBUG nova.compute.manager [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.658 351492 DEBUG nova.compute.manager [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing instance network info cache due to event network-changed-6b217cd3-164a-4fb4-8eb6-f1eb3c806963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.659 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.660 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.661 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Refreshing network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:12:59 compute-0 podman[158098]: time="2025-12-03T02:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:12:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:12:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.883 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.884 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.885 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.886 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.887 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.890 351492 INFO nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Terminating instance#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.892 351492 DEBUG nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.909 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:12:59 compute-0 nova_compute[351485]: 2025-12-03 02:12:59.910 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:12:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:12:59.911 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:13:00 compute-0 kernel: tap6b217cd3-16 (unregistering): left promiscuous mode
Dec  3 02:13:00 compute-0 NetworkManager[48912]: <info>  [1764727980.0743] device (tap6b217cd3-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00058|binding|INFO|Releasing lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 from this chassis (sb_readonly=0)
Dec  3 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00059|binding|INFO|Setting lport 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 down in Southbound
Dec  3 02:13:00 compute-0 ovn_controller[89134]: 2025-12-03T02:13:00Z|00060|binding|INFO|Removing iface tap6b217cd3-16 ovn-installed in OVS
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.101 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:35:ef 192.168.0.85'], port_security=['fa:16:3e:da:35:ef 192.168.0.85'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:cidrs': '192.168.0.85/24', 'neutron:device_id': 'b43e79bd-550f-42f8-9aa7-980b6bca3f70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-olz3x44nal64-mj7m4uljqyof-c7kfgdonucij-port-nmbntpj2trtj', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=6b217cd3-164a-4fb4-8eb6-f1eb3c806963) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.103 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.105 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba11691-2711-476c-9191-cb6dfd0efa7d#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.114 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.133 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[482bdd4d-44d4-476f-9737-bb00e8a97622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  3 02:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 23.080s CPU time.
Dec  3 02:13:00 compute-0 systemd-machined[138558]: Machine qemu-4-instance-00000004 terminated.
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.189 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3d86a6e3-285a-4dea-8271-25b3a138b833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.192 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[43c0462b-bbb1-4c67-ae3a-c037acda73b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.222 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[cf829d92-56ff-4379-acf3-218a815b75b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.252 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a0bcd0-1c16-4af7-b454-a90581a0cc97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba11691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:a4:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573048, 'reachable_time': 30697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440475, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.280 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[77a54cf0-c696-4e66-a271-f0f1a5b36f7c]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573065, 'tstamp': 573065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 440476, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba11691-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573069, 'tstamp': 573069}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 440476, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.282 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.285 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.293 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.294 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba11691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.294 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.295 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba11691-20, col_values=(('external_ids', {'iface-id': '8c8945aa-32be-4ced-a7fe-2b9502f30008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:00 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:00.296 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.363 351492 INFO nova.virt.libvirt.driver [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Instance destroyed successfully.#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.364 351492 DEBUG nova.objects.instance [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid b43e79bd-550f-42f8-9aa7-980b6bca3f70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.380 351492 DEBUG nova.virt.libvirt.vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:02:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-44nal64-mj7m4uljqyof-c7kfgdonucij-vnf-5nwa6zvischw',id=4,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:02:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='0f6ab671-23df-4a6d-9613-02f9fb5fb294'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-54gvmjwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:02:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODIxOTA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  3 02:13:00 compute-0 nova_compute[351485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODIxO
TA3NDA5MjMzNjI0MzkxMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyMTkwNzQwOTIzMzYyNDM5MTA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjE5MDc0MDkyMzM2MjQzOTEwPT0tLQo=',user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=b43e79bd-550f-42f8-9aa7-980b6bca3f70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.380 351492 DEBUG nova.network.os_vif_util [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.381 351492 DEBUG nova.network.os_vif_util [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.382 351492 DEBUG os_vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.386 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.386 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b217cd3-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:00 compute-0 nova_compute[351485]: 2025-12-03 02:13:00.399 351492 INFO os_vif [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:35:ef,bridge_name='br-int',has_traffic_filtering=True,id=6b217cd3-164a-4fb4-8eb6-f1eb3c806963,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b217cd3-16')#033[00m
Dec  3 02:13:00 compute-0 rsyslogd[188612]: message too long (8192) with configured size 8096, begin of message is: 2025-12-03 02:13:00.380 351492 DEBUG nova.virt.libvirt.vif [None req-00c20947-1b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  3 02:13:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 139 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.100 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updated VIF entry in instance network info cache for port 6b217cd3-164a-4fb4-8eb6-f1eb3c806963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.101 351492 DEBUG nova.network.neutron [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [{"id": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "address": "fa:16:3e:da:35:ef", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.85", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b217cd3-16", "ovs_interfaceid": "6b217cd3-164a-4fb4-8eb6-f1eb3c806963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.120 351492 DEBUG oslo_concurrency.lockutils [req-dcc5bb62-07ad-449d-85b2-bd3ada8f2548 req-67b90880-c60a-43b4-a80d-d0984d97d08e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-b43e79bd-550f-42f8-9aa7-980b6bca3f70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:13:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.776 351492 INFO nova.virt.libvirt.driver [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deleting instance files /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70_del#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.777 351492 INFO nova.virt.libvirt.driver [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deletion of /var/lib/nova/instances/b43e79bd-550f-42f8-9aa7-980b6bca3f70_del complete#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.785 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-unplugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.786 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.787 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.787 351492 DEBUG oslo_concurrency.lockutils [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.788 351492 DEBUG nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] No waiting events found dispatching network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.788 351492 WARNING nova.compute.manager [req-13fe782a-5b6e-46af-83af-9a99ab98fa3f req-9f2fa56f-475f-4480-8192-7299dcd3f2d8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Received unexpected event network-vif-plugged-6b217cd3-164a-4fb4-8eb6-f1eb3c806963 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.866 351492 INFO nova.compute.manager [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 1.97 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.868 351492 DEBUG oslo.service.loopingcall [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.869 351492 DEBUG nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:13:01 compute-0 nova_compute[351485]: 2025-12-03 02:13:01.870 351492 DEBUG nova.network.neutron [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:13:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 116 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Dec  3 02:13:02 compute-0 nova_compute[351485]: 2025-12-03 02:13:02.970 351492 DEBUG nova.network.neutron [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:13:02 compute-0 nova_compute[351485]: 2025-12-03 02:13:02.991 351492 INFO nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Took 1.12 seconds to deallocate network for instance.#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.049 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.049 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.160 351492 DEBUG oslo_concurrency.processutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:13:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.433 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:13:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117242781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.642 351492 DEBUG oslo_concurrency.processutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.657 351492 DEBUG nova.compute.provider_tree [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.687 351492 DEBUG nova.scheduler.client.report [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.721 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.768 351492 INFO nova.scheduler.client.report [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance b43e79bd-550f-42f8-9aa7-980b6bca3f70#033[00m
Dec  3 02:13:03 compute-0 podman[440530]: 2025-12-03 02:13:03.857175633 +0000 UTC m=+0.108422108 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:13:03 compute-0 nova_compute[351485]: 2025-12-03 02:13:03.874 351492 DEBUG oslo_concurrency.lockutils [None req-00c20947-1b0f-41ae-befa-b7ac1d1620ad 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "b43e79bd-550f-42f8-9aa7-980b6bca3f70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:03 compute-0 podman[440531]: 2025-12-03 02:13:03.883655092 +0000 UTC m=+0.137494531 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 02:13:03 compute-0 podman[440532]: 2025-12-03 02:13:03.901273301 +0000 UTC m=+0.141183566 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:13:03 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:03.914 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 101 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 11 op/s
Dec  3 02:13:05 compute-0 nova_compute[351485]: 2025-12-03 02:13:05.392 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:08 compute-0 nova_compute[351485]: 2025-12-03 02:13:08.435 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:08 compute-0 podman[440589]: 2025-12-03 02:13:08.884256788 +0000 UTC m=+0.128347162 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:13:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:10 compute-0 nova_compute[351485]: 2025-12-03 02:13:10.395 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:12 compute-0 nova_compute[351485]: 2025-12-03 02:13:12.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec  3 02:13:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.441 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:13:13 compute-0 nova_compute[351485]: 2025-12-03 02:13:13.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:13:13 compute-0 podman[440611]: 2025-12-03 02:13:13.869674652 +0000 UTC m=+0.108688406 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, version=9.6)
Dec  3 02:13:13 compute-0 podman[440612]: 2025-12-03 02:13:13.877786572 +0000 UTC m=+0.117157106 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:13:13 compute-0 podman[440613]: 2025-12-03 02:13:13.886759686 +0000 UTC m=+0.116542309 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec  3 02:13:13 compute-0 podman[440614]: 2025-12-03 02:13:13.888351141 +0000 UTC m=+0.121851329 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec  3 02:13:13 compute-0 podman[440610]: 2025-12-03 02:13:13.964163076 +0000 UTC m=+0.210427355 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:13:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:13:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657833017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.144 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.259 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.260 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:13:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.913 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.9552001953125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.916 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:14 compute-0 nova_compute[351485]: 2025-12-03 02:13:14.917 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.000 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.001 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.002 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.036 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.357 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727980.3559287, b43e79bd-550f-42f8-9aa7-980b6bca3f70 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.358 351492 INFO nova.compute.manager [-] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.390 351492 DEBUG nova.compute.manager [None req-32dde6c3-7b78-419d-997c-a680123d8031 - - - - - -] [instance: b43e79bd-550f-42f8-9aa7-980b6bca3f70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.399 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:13:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268019785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.604 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.618 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.643 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.646 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:13:15 compute-0 nova_compute[351485]: 2025-12-03 02:13:15.646 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  3 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.649 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.650 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.689 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:13:17 compute-0 nova_compute[351485]: 2025-12-03 02:13:17.690 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:18 compute-0 nova_compute[351485]: 2025-12-03 02:13:18.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:18 compute-0 nova_compute[351485]: 2025-12-03 02:13:18.612 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.305 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.307 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.308 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.311 351492 INFO nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Terminating instance#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.313 351492 DEBUG nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:13:19 compute-0 kernel: tapd2a50b9b-c2 (unregistering): left promiscuous mode
Dec  3 02:13:19 compute-0 NetworkManager[48912]: <info>  [1764727999.5024] device (tapd2a50b9b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00061|binding|INFO|Releasing lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 from this chassis (sb_readonly=0)
Dec  3 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00062|binding|INFO|Setting lport d2a50b9b-c23e-4e96-a247-ba01de01a3f1 down in Southbound
Dec  3 02:13:19 compute-0 ovn_controller[89134]: 2025-12-03T02:13:19Z|00063|binding|INFO|Removing iface tapd2a50b9b-c2 ovn-installed in OVS
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.510 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.511 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.509 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.513 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e679b680>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.519 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a6:32 192.168.0.5'], port_security=['fa:16:3e:8f:a6:32 192.168.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.5/24', 'neutron:device_id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9746b242761a48048d185ce26d622b33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43ddbc1b-0018-4ea3-a338-8898d9bf8c87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13e9ae70-0999-47f9-bc0c-397e04263018, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d2a50b9b-c23e-4e96-a247-ba01de01a3f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.522 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d2a50b9b-c23e-4e96-a247-ba01de01a3f1 in datapath 7ba11691-2711-476c-9191-cb6dfd0efa7d unbound from our chassis#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.524 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ba11691-2711-476c-9191-cb6dfd0efa7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.527 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc217da-257a-4e81-99c6-caa9bae30e4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.528 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d namespace which is not needed anymore#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  3 02:13:19 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 37.900s CPU time.
Dec  3 02:13:19 compute-0 systemd-machined[138558]: Machine qemu-1-instance-00000001 terminated.
Dec  3 02:13:19 compute-0 kernel: tapd2a50b9b-c2: entered promiscuous mode
Dec  3 02:13:19 compute-0 kernel: tapd2a50b9b-c2 (unregistering): left promiscuous mode
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : haproxy version is 2.8.14-c23fe91
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [NOTICE]   (414882) : path to executable is /usr/sbin/haproxy
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [WARNING]  (414882) : Exiting Master process...
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [WARNING]  (414882) : Exiting Master process...
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.766 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [ALERT]    (414882) : Current worker (414906) exited with code 143 (Terminated)
Dec  3 02:13:19 compute-0 neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d[414873]: [WARNING]  (414882) : All workers exited. Exiting... (0)
Dec  3 02:13:19 compute-0 systemd[1]: libpod-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope: Deactivated successfully.
Dec  3 02:13:19 compute-0 podman[440784]: 2025-12-03 02:13:19.779454416 +0000 UTC m=+0.091175661 container died 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.786 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9182286b-5a08-4961-b4bb-c0e2f05746f7', 'name': 'test_0', 'flavor': {'id': 'bc665ec6-3672-4e52-a447-5267b04e227a', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '466cf0db-c3be-4d70-b9f3-08c056c2cad9'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': '9746b242761a48048d185ce26d622b33', 'user_id': '03ba25e4009b43f7b0054fee32bf9136', 'hostId': '875bc95fe8ced0718f70958dc5cab77c14f10a49156218188758f4cd', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.787 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.790 351492 INFO nova.virt.libvirt.driver [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Instance destroyed successfully.#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:13:19.788762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.789 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.792 351492 DEBUG nova.objects.instance [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lazy-loading 'resources' on Instance uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.795 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of memory.usage: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.797 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:13:19.798963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:13:19.803897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:13:19.805633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:13:19.807848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.811 351492 DEBUG nova.virt.libvirt.vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T01:54:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T01:54:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9746b242761a48048d185ce26d622b33',ramdisk_id='',reservation_id='r-2j005007',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='466cf0db-c3be-4d70-b9f3-08c056c2cad9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T01:54:47Z,user_data=None,user_id='03ba25e4009b43f7b0054fee32bf9136',uuid=9182286b-5a08-4961-b4bb-c0e2f05746f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:13:19.810120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.811 351492 DEBUG nova.network.os_vif_util [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converting VIF {"id": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "address": "fa:16:3e:8f:a6:32", "network": {"id": "7ba11691-2711-476c-9191-cb6dfd0efa7d", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9746b242761a48048d185ce26d622b33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2a50b9b-c2", "ovs_interfaceid": "d2a50b9b-c23e-4e96-a247-ba01de01a3f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.813 351492 DEBUG nova.network.os_vif_util [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.813 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:13:19.812261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.814 351492 DEBUG os_vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.814 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:13:19.814811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.817 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.817 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.817 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2a50b9b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:13:19.817050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.819 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:13:19.819361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.823 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:13:19.825065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.826 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.826 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:13:19.827350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of power.state: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:13:19.829719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.830 351492 INFO os_vif [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:a6:32,bridge_name='br-int',has_traffic_filtering=True,id=d2a50b9b-c23e-4e96-a247-ba01de01a3f1,network=Network(7ba11691-2711-476c-9191-cb6dfd0efa7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2a50b9b-c2')#033[00m
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:13:19.832723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.834 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:13:19.835501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:13:19.837907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:13:19.840170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.841 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.843 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:13:19.842985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28-userdata-shm.mount: Deactivated successfully.
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of cpu: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.851 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-be2b42ad51a1eafabc174b54703a8a7fc40735ce50000101ab3bd4077ab4d5c6-merged.mount: Deactivated successfully.
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:13:19.852683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:13:19.854798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.857 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.862 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:13:19.862669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.864 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 podman[440784]: 2025-12-03 02:13:19.871397847 +0000 UTC m=+0.183119092 container cleanup 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.872 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:13:19.872027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.874 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.875 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:13:19.877318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.881 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:13:19.881170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.883 14 DEBUG ceilometer.compute.pollsters [-] Instance 9182286b-5a08-4961-b4bb-c0e2f05746f7 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance <name=instance-00000001, id=9182286b-5a08-4961-b4bb-c0e2f05746f7>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.884 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.885 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.886 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 systemd[1]: libpod-conmon-08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28.scope: Deactivated successfully.
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.887 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.888 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.889 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.890 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.891 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:13:19.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:13:19 compute-0 podman[440830]: 2025-12-03 02:13:19.976712087 +0000 UTC m=+0.068250912 container remove 08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.986 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[43285fa9-cb31-4b8b-a768-ca48373d86e3]: (4, ('Wed Dec  3 02:13:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d (08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28)\n08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28\nWed Dec  3 02:13:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d (08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28)\n08a96f0c99af215211c236242d278753571f77111c0901d8562f775763893a28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.988 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[01673a78-677a-4f0d-9c15-e104bb3222c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.989 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba11691-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 kernel: tap7ba11691-20: left promiscuous mode
Dec  3 02:13:19 compute-0 nova_compute[351485]: 2025-12-03 02:13:19.996 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:19.998 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9809a70b-3fc2-4abf-9803-a5f5c345baf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.005 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.013 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4b78f996-0e4e-4d28-8dbd-b158ad79ce75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.014 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2998aec4-8bd4-4dbd-b4a3-0e09e2b614c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.036 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[137531a9-9696-4d4e-b4a6-19962e234d46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573032, 'reachable_time': 42038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440847, 'error': None, 'target': 'ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ba11691\x2d2711\x2d476c\x2d9191\x2dcb6dfd0efa7d.mount: Deactivated successfully.
Dec  3 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.052 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ba11691-2711-476c-9191-cb6dfd0efa7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:13:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:20.054 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[a259cd59-ef0d-43eb-9062-464f4c9e8c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.129 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.129 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.130 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.130 351492 DEBUG oslo_concurrency.lockutils [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.131 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.132 351492 DEBUG nova.compute.manager [req-14d10a20-f1b3-4bb3-9e8e-1c5552dc2a73 req-1ac8dfc1-8da3-463a-9cd4-97e17a58be28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-unplugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:13:20 compute-0 nova_compute[351485]: 2025-12-03 02:13:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.084 351492 INFO nova.virt.libvirt.driver [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deleting instance files /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7_del#033[00m
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.085 351492 INFO nova.virt.libvirt.driver [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deletion of /var/lib/nova/instances/9182286b-5a08-4961-b4bb-c0e2f05746f7_del complete#033[00m
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.161 351492 INFO nova.compute.manager [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.161 351492 DEBUG oslo.service.loopingcall [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.162 351492 DEBUG nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:13:21 compute-0 nova_compute[351485]: 2025-12-03 02:13:21.162 351492 DEBUG nova.network.neutron [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.251 351492 DEBUG nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.252 351492 DEBUG oslo_concurrency.lockutils [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.253 351492 DEBUG nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] No waiting events found dispatching network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.253 351492 WARNING nova.compute.manager [req-1697f01c-2816-4b7b-881a-7756f6a7fb0e req-2782cf66-6914-4cfc-bd37-4f9493049dfa 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received unexpected event network-vif-plugged-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.644 351492 DEBUG nova.network.neutron [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.661 351492 INFO nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.709 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.710 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.735 351492 DEBUG nova.compute.manager [req-4f75bc96-6b4a-4f96-9867-39e13ebf7be6 req-a06dd9cd-5261-48df-9d0b-4eb977fdd646 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Received event network-vif-deleted-d2a50b9b-c23e-4e96-a247-ba01de01a3f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:13:22 compute-0 nova_compute[351485]: 2025-12-03 02:13:22.788 351492 DEBUG oslo_concurrency.processutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:13:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 51 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Dec  3 02:13:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:13:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529706778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.318 351492 DEBUG oslo_concurrency.processutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.323 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.330 351492 DEBUG nova.compute.provider_tree [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.361 351492 WARNING nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.361 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 9182286b-5a08-4961-b4bb-c0e2f05746f7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.362 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.367 351492 DEBUG nova.scheduler.client.report [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.401 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.449 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.455 351492 INFO nova.scheduler.client.report [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Deleted allocations for instance 9182286b-5a08-4961-b4bb-c0e2f05746f7#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.560 351492 DEBUG oslo_concurrency.lockutils [None req-c046b59f-4c02-4651-a8e1-906971ac810e 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.562 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.586 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "9182286b-5a08-4961-b4bb-c0e2f05746f7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:23 compute-0 nova_compute[351485]: 2025-12-03 02:13:23.615 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:24 compute-0 nova_compute[351485]: 2025-12-03 02:13:24.823 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 31 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Dec  3 02:13:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:13:28
Dec  3 02:13:28 compute-0 nova_compute[351485]: 2025-12-03 02:13:28.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:13:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:13:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:13:29 compute-0 podman[158098]: time="2025-12-03T02:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:13:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8177 "" "Go-http-client/1.1"
Dec  3 02:13:29 compute-0 nova_compute[351485]: 2025-12-03 02:13:29.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:13:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:13:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec  3 02:13:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:33 compute-0 nova_compute[351485]: 2025-12-03 02:13:33.455 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.785 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764727999.7830377, 9182286b-5a08-4961-b4bb-c0e2f05746f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.785 351492 INFO nova.compute.manager [-] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.802 351492 DEBUG nova.compute.manager [None req-399f83d3-a8c6-4f88-9dde-2680da6c20e6 - - - - - -] [instance: 9182286b-5a08-4961-b4bb-c0e2f05746f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:13:34 compute-0 nova_compute[351485]: 2025-12-03 02:13:34.830 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:34 compute-0 podman[440872]: 2025-12-03 02:13:34.86242773 +0000 UTC m=+0.113640596 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:13:34 compute-0 podman[440874]: 2025-12-03 02:13:34.876130027 +0000 UTC m=+0.116014953 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:13:34 compute-0 podman[440873]: 2025-12-03 02:13:34.888706403 +0000 UTC m=+0.133080906 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:13:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec  3 02:13:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 4 op/s
Dec  3 02:13:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:38 compute-0 nova_compute[351485]: 2025-12-03 02:13:38.459 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:13:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:39 compute-0 nova_compute[351485]: 2025-12-03 02:13:39.832 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:39 compute-0 podman[440930]: 2025-12-03 02:13:39.879140678 +0000 UTC m=+0.131294096 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:13:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:43 compute-0 nova_compute[351485]: 2025-12-03 02:13:43.463 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:44 compute-0 nova_compute[351485]: 2025-12-03 02:13:44.837 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:44 compute-0 podman[440955]: 2025-12-03 02:13:44.844454622 +0000 UTC m=+0.111013152 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, distribution-scope=public, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, container_name=kepler, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:13:44 compute-0 podman[440961]: 2025-12-03 02:13:44.850631637 +0000 UTC m=+0.106631438 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  3 02:13:44 compute-0 podman[440954]: 2025-12-03 02:13:44.85533097 +0000 UTC m=+0.133667113 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:13:44 compute-0 podman[440953]: 2025-12-03 02:13:44.875193592 +0000 UTC m=+0.156065017 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 02:13:44 compute-0 podman[440952]: 2025-12-03 02:13:44.903456321 +0000 UTC m=+0.193094384 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:13:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:13:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:13:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:13:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077358783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:13:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:48 compute-0 nova_compute[351485]: 2025-12-03 02:13:48.466 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:49 compute-0 nova_compute[351485]: 2025-12-03 02:13:49.840 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:52 compute-0 ovn_controller[89134]: 2025-12-03T02:13:52Z|00064|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  3 02:13:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:53 compute-0 nova_compute[351485]: 2025-12-03 02:13:53.469 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:54 compute-0 nova_compute[351485]: 2025-12-03 02:13:54.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:13:58 compute-0 nova_compute[351485]: 2025-12-03 02:13:58.472 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 82665bd8-1d24-4239-ac89-4e1cf701ada3 does not exist
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 87c00d6d-dacd-4bee-a59d-2aac79096476 does not exist
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f5eca8ff-705a-4915-bb84-772bbe55622b does not exist
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:13:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:13:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:13:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.645 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.646 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:13:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:13:59.646 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:13:59 compute-0 podman[158098]: time="2025-12-03T02:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:13:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8187 "" "Go-http-client/1.1"
Dec  3 02:13:59 compute-0 nova_compute[351485]: 2025-12-03 02:13:59.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.062956566 +0000 UTC m=+0.119557414 container create 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.002467475 +0000 UTC m=+0.059068363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:00 compute-0 systemd[1]: Started libpod-conmon-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope.
Dec  3 02:14:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.244364989 +0000 UTC m=+0.300965897 container init 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.264835858 +0000 UTC m=+0.321436696 container start 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.271085695 +0000 UTC m=+0.327686543 container attach 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:14:00 compute-0 thirsty_williamson[441343]: 167 167
Dec  3 02:14:00 compute-0 systemd[1]: libpod-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope: Deactivated successfully.
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.278808323 +0000 UTC m=+0.335409171 container died 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 02:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-60cb65c6f35898da597774f7f4d409c7e47881663410b208bbfb5a2474299c98-merged.mount: Deactivated successfully.
Dec  3 02:14:00 compute-0 podman[441324]: 2025-12-03 02:14:00.365783344 +0000 UTC m=+0.422384182 container remove 9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:14:00 compute-0 systemd[1]: libpod-conmon-9b7f50cdb6ec18416a28577f937f33fc371640f8a7acdf291e6aa78ee4f28b5e.scope: Deactivated successfully.
Dec  3 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.676944228 +0000 UTC m=+0.099077535 container create beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.641914237 +0000 UTC m=+0.064047614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:00 compute-0 systemd[1]: Started libpod-conmon-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope.
Dec  3 02:14:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.85338642 +0000 UTC m=+0.275519807 container init beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.891389385 +0000 UTC m=+0.313522702 container start beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:14:00 compute-0 podman[441366]: 2025-12-03 02:14:00.898584169 +0000 UTC m=+0.320717546 container attach beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:14:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:14:01 compute-0 openstack_network_exporter[368278]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:14:02 compute-0 focused_noether[441382]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:14:02 compute-0 focused_noether[441382]: --> relative data size: 1.0
Dec  3 02:14:02 compute-0 focused_noether[441382]: --> All data devices are unavailable
Dec  3 02:14:02 compute-0 systemd[1]: libpod-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Deactivated successfully.
Dec  3 02:14:02 compute-0 systemd[1]: libpod-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Consumed 1.169s CPU time.
Dec  3 02:14:02 compute-0 podman[441411]: 2025-12-03 02:14:02.206840832 +0000 UTC m=+0.049478161 container died beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 02:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8ec453aa34f75a8563b70bc79a2f8daec1ead861f0b22c9268da00b16ccf4aa-merged.mount: Deactivated successfully.
Dec  3 02:14:02 compute-0 podman[441411]: 2025-12-03 02:14:02.315424724 +0000 UTC m=+0.158062053 container remove beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:14:02 compute-0 systemd[1]: libpod-conmon-beb344986d8fcb56ea1ae3527b4834f52a424cbfa5de7dda874c4e3080e076e9.scope: Deactivated successfully.
Dec  3 02:14:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:03 compute-0 nova_compute[351485]: 2025-12-03 02:14:03.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.48234623 +0000 UTC m=+0.101640146 container create 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.435242638 +0000 UTC m=+0.054536604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:03 compute-0 systemd[1]: Started libpod-conmon-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope.
Dec  3 02:14:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.614622533 +0000 UTC m=+0.233916449 container init 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.625612554 +0000 UTC m=+0.244906430 container start 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.630832722 +0000 UTC m=+0.250126608 container attach 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:14:03 compute-0 friendly_blackburn[441577]: 167 167
Dec  3 02:14:03 compute-0 systemd[1]: libpod-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope: Deactivated successfully.
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.632767246 +0000 UTC m=+0.252061122 container died 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d545dcbb67281ede438f0e7fbb66a8401c822d77d7fe08ccb547a39b7c3d929-merged.mount: Deactivated successfully.
Dec  3 02:14:03 compute-0 podman[441561]: 2025-12-03 02:14:03.678443419 +0000 UTC m=+0.297737295 container remove 5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:14:03 compute-0 systemd[1]: libpod-conmon-5e3948b2805792919e8864e7eee64b842fccc98d38c0e586b70fcfa3134d6fc9.scope: Deactivated successfully.
Dec  3 02:14:03 compute-0 podman[441600]: 2025-12-03 02:14:03.93402085 +0000 UTC m=+0.099875847 container create f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:14:03 compute-0 podman[441600]: 2025-12-03 02:14:03.8969347 +0000 UTC m=+0.062789757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:04 compute-0 systemd[1]: Started libpod-conmon-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope.
Dec  3 02:14:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.098419701 +0000 UTC m=+0.264274758 container init f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.122966745 +0000 UTC m=+0.288821732 container start f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 02:14:04 compute-0 podman[441600]: 2025-12-03 02:14:04.129928592 +0000 UTC m=+0.295783629 container attach f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:14:04 compute-0 nova_compute[351485]: 2025-12-03 02:14:04.853 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]: {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    "0": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "devices": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "/dev/loop3"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            ],
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_name": "ceph_lv0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_size": "21470642176",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "name": "ceph_lv0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "tags": {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_name": "ceph",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.crush_device_class": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.encrypted": "0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_id": "0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.vdo": "0"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            },
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "vg_name": "ceph_vg0"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        }
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    ],
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    "1": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "devices": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "/dev/loop4"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            ],
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_name": "ceph_lv1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_size": "21470642176",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "name": "ceph_lv1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "tags": {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_name": "ceph",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.crush_device_class": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.encrypted": "0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_id": "1",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.vdo": "0"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            },
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "vg_name": "ceph_vg1"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        }
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    ],
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    "2": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "devices": [
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "/dev/loop5"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            ],
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_name": "ceph_lv2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_size": "21470642176",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "name": "ceph_lv2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "tags": {
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.cluster_name": "ceph",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.crush_device_class": "",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.encrypted": "0",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osd_id": "2",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:                "ceph.vdo": "0"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            },
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "type": "block",
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:            "vg_name": "ceph_vg2"
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:        }
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]:    ]
Dec  3 02:14:04 compute-0 beautiful_bassi[441615]: }
Dec  3 02:14:05 compute-0 systemd[1]: libpod-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope: Deactivated successfully.
Dec  3 02:14:05 compute-0 podman[441600]: 2025-12-03 02:14:05.007875432 +0000 UTC m=+1.173730429 container died f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-30183fd9da6cc9f7bed24e6e0e93bef1162a6293c5687d88c246698da01547aa-merged.mount: Deactivated successfully.
Dec  3 02:14:05 compute-0 podman[441600]: 2025-12-03 02:14:05.117503994 +0000 UTC m=+1.283358961 container remove f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 02:14:05 compute-0 systemd[1]: libpod-conmon-f188bb8dfff5b1e3421894af88f029aca86c67a5e5a666da4dd436c60a4bd4f7.scope: Deactivated successfully.
Dec  3 02:14:05 compute-0 podman[441627]: 2025-12-03 02:14:05.1459958 +0000 UTC m=+0.089846083 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:14:05 compute-0 podman[441636]: 2025-12-03 02:14:05.179590231 +0000 UTC m=+0.118680029 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:14:05 compute-0 podman[441633]: 2025-12-03 02:14:05.180503926 +0000 UTC m=+0.120592422 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.253859174 +0000 UTC m=+0.104440906 container create 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.209207701 +0000 UTC m=+0.059789463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:06 compute-0 systemd[1]: Started libpod-conmon-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope.
Dec  3 02:14:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.378190662 +0000 UTC m=+0.228772394 container init 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.397839538 +0000 UTC m=+0.248421240 container start 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.402908831 +0000 UTC m=+0.253490533 container attach 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec  3 02:14:06 compute-0 flamboyant_elion[441846]: 167 167
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.413218073 +0000 UTC m=+0.263799785 container died 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:14:06 compute-0 systemd[1]: libpod-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope: Deactivated successfully.
Dec  3 02:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc95364e5da55729124ba7e1bffa5451044b07321af1d3b7c2816e3236bc3a48-merged.mount: Deactivated successfully.
Dec  3 02:14:06 compute-0 podman[441831]: 2025-12-03 02:14:06.478076838 +0000 UTC m=+0.328658530 container remove 565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:14:06 compute-0 systemd[1]: libpod-conmon-565cdda18ba721415bf6693e664ac5a256926ec41086d1e484f99da5f046c958.scope: Deactivated successfully.
Dec  3 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.791352751 +0000 UTC m=+0.113351308 container create 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.740943405 +0000 UTC m=+0.062942042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:14:06 compute-0 systemd[1]: Started libpod-conmon-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope.
Dec  3 02:14:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:14:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.957993536 +0000 UTC m=+0.279992163 container init 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.990976539 +0000 UTC m=+0.312975116 container start 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:14:06 compute-0 podman[441869]: 2025-12-03 02:14:06.998496382 +0000 UTC m=+0.320495029 container attach 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:14:08 compute-0 mystifying_pike[441885]: {
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_id": 2,
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "type": "bluestore"
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    },
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_id": 1,
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "type": "bluestore"
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    },
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_id": 0,
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:        "type": "bluestore"
Dec  3 02:14:08 compute-0 mystifying_pike[441885]:    }
Dec  3 02:14:08 compute-0 mystifying_pike[441885]: }
Dec  3 02:14:08 compute-0 systemd[1]: libpod-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Deactivated successfully.
Dec  3 02:14:08 compute-0 podman[441869]: 2025-12-03 02:14:08.223741178 +0000 UTC m=+1.545739765 container died 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:14:08 compute-0 systemd[1]: libpod-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Consumed 1.236s CPU time.
Dec  3 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a19ad806ea7e23c893de836694e12fb63198e37f56af711f6443e1932a28ad76-merged.mount: Deactivated successfully.
Dec  3 02:14:08 compute-0 podman[441869]: 2025-12-03 02:14:08.317943253 +0000 UTC m=+1.639941840 container remove 8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_pike, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:14:08 compute-0 systemd[1]: libpod-conmon-8431f9c48f6eb5e0e946416a058e79182af1dfac2b192cfeb50e82a1383de76b.scope: Deactivated successfully.
Dec  3 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:14:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:14:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:14:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:14:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2522e6e4-962e-4894-b6b8-bd453a8efaaf does not exist
Dec  3 02:14:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 83d52cef-ee93-4211-a11d-f348f9c9d6aa does not exist
Dec  3 02:14:08 compute-0 nova_compute[351485]: 2025-12-03 02:14:08.479 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:14:09 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:14:09 compute-0 nova_compute[351485]: 2025-12-03 02:14:09.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:10 compute-0 podman[441979]: 2025-12-03 02:14:10.910148714 +0000 UTC m=+0.154336237 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:14:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:12 compute-0 nova_compute[351485]: 2025-12-03 02:14:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.481 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:14:13 compute-0 nova_compute[351485]: 2025-12-03 02:14:13.642 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:14:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:14:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809762466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.198 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.802 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.805 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4126MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.806 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.807 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.862 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.896 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.897 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:14:14 compute-0 nova_compute[351485]: 2025-12-03 02:14:14.917 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:14:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:14:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937025677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.440 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.456 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.830 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.867 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:14:15 compute-0 nova_compute[351485]: 2025-12-03 02:14:15.867 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:14:15 compute-0 podman[442044]: 2025-12-03 02:14:15.883491195 +0000 UTC m=+0.114535901 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:14:15 compute-0 podman[442043]: 2025-12-03 02:14:15.885159782 +0000 UTC m=+0.129880475 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Dec  3 02:14:15 compute-0 podman[442054]: 2025-12-03 02:14:15.888503517 +0000 UTC m=+0.106071902 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2)
Dec  3 02:14:15 compute-0 podman[442045]: 2025-12-03 02:14:15.924223198 +0000 UTC m=+0.142830682 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec  3 02:14:15 compute-0 podman[442042]: 2025-12-03 02:14:15.946997872 +0000 UTC m=+0.192928830 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:14:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.868 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.869 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:14:17 compute-0 nova_compute[351485]: 2025-12-03 02:14:17.870 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:14:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.423 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.424 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.425 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:18 compute-0 nova_compute[351485]: 2025-12-03 02:14:18.484 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.127 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:19 compute-0 nova_compute[351485]: 2025-12-03 02:14:19.868 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:22 compute-0 nova_compute[351485]: 2025-12-03 02:14:22.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:23 compute-0 nova_compute[351485]: 2025-12-03 02:14:23.487 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:23 compute-0 nova_compute[351485]: 2025-12-03 02:14:23.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:24 compute-0 nova_compute[351485]: 2025-12-03 02:14:24.872 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.144164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067144235, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3465843, "memory_usage": 3521016, "flush_reason": "Manual Compaction"}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067171886, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3400030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34771, "largest_seqno": 36817, "table_properties": {"data_size": 3390593, "index_size": 5995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18586, "raw_average_key_size": 20, "raw_value_size": 3372034, "raw_average_value_size": 3641, "num_data_blocks": 266, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764727837, "oldest_key_time": 1764727837, "file_creation_time": 1764728067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 28313 microseconds, and 14375 cpu microseconds.
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.172478) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3400030 bytes OK
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.173248) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176862) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176898) EVENT_LOG_v1 {"time_micros": 1764728067176886, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.176927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3457287, prev total WAL file size 3457287, number of live WAL files 2.
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.183399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3320KB)], [80(7108KB)]
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067183447, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10679039, "oldest_snapshot_seqno": -1}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5680 keys, 8921779 bytes, temperature: kUnknown
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067256405, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8921779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8884043, "index_size": 22458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 143438, "raw_average_key_size": 25, "raw_value_size": 8781518, "raw_average_value_size": 1546, "num_data_blocks": 922, "num_entries": 5680, "num_filter_entries": 5680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.257402) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8921779 bytes
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.260721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 6194, records dropped: 514 output_compression: NoCompression
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.260754) EVENT_LOG_v1 {"time_micros": 1764728067260738, "job": 46, "event": "compaction_finished", "compaction_time_micros": 73185, "compaction_time_cpu_micros": 44350, "output_level": 6, "num_output_files": 1, "total_output_size": 8921779, "num_input_records": 6194, "num_output_records": 5680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067262850, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728067266409, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.183237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:14:27.266695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:14:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:14:28
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:14:28 compute-0 nova_compute[351485]: 2025-12-03 02:14:28.489 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:14:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:14:29 compute-0 podman[158098]: time="2025-12-03T02:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:14:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8179 "" "Go-http-client/1.1"
Dec  3 02:14:29 compute-0 nova_compute[351485]: 2025-12-03 02:14:29.876 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:30 compute-0 nova_compute[351485]: 2025-12-03 02:14:30.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:14:30 compute-0 nova_compute[351485]: 2025-12-03 02:14:30.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:14:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:14:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:14:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:33 compute-0 nova_compute[351485]: 2025-12-03 02:14:33.491 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:34 compute-0 nova_compute[351485]: 2025-12-03 02:14:34.880 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:35 compute-0 podman[442148]: 2025-12-03 02:14:35.84889292 +0000 UTC m=+0.102213293 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:14:35 compute-0 podman[442149]: 2025-12-03 02:14:35.869273116 +0000 UTC m=+0.129798343 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:14:35 compute-0 podman[442150]: 2025-12-03 02:14:35.870946824 +0000 UTC m=+0.109324424 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:14:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:38 compute-0 nova_compute[351485]: 2025-12-03 02:14:38.494 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:14:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:39 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:39.458 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:14:39 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:39.460 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:14:39 compute-0 nova_compute[351485]: 2025-12-03 02:14:39.462 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:39 compute-0 nova_compute[351485]: 2025-12-03 02:14:39.884 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:41 compute-0 podman[442206]: 2025-12-03 02:14:41.876251842 +0000 UTC m=+0.126003516 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:14:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:43 compute-0 nova_compute[351485]: 2025-12-03 02:14:43.496 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:44 compute-0 nova_compute[351485]: 2025-12-03 02:14:44.888 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 16 MiB data, 225 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:14:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:45.463 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:14:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  3 02:14:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  3 02:14:46 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  3 02:14:46 compute-0 podman[442226]: 2025-12-03 02:14:46.872417769 +0000 UTC m=+0.114436969 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 02:14:46 compute-0 podman[442232]: 2025-12-03 02:14:46.87705241 +0000 UTC m=+0.106371181 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec  3 02:14:46 compute-0 podman[442225]: 2025-12-03 02:14:46.896960173 +0000 UTC m=+0.146951708 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  3 02:14:46 compute-0 podman[442227]: 2025-12-03 02:14:46.902962883 +0000 UTC m=+0.139549869 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:14:46 compute-0 podman[442228]: 2025-12-03 02:14:46.914474679 +0000 UTC m=+0.147637098 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec  3 02:14:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 819 KiB/s wr, 5 op/s
Dec  3 02:14:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:14:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:14:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:14:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/929566334' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  3 02:14:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  3 02:14:48 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  3 02:14:48 compute-0 nova_compute[351485]: 2025-12-03 02:14:48.499 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 24 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1024 KiB/s wr, 7 op/s
Dec  3 02:14:49 compute-0 nova_compute[351485]: 2025-12-03 02:14:49.892 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.6 MiB/s wr, 40 op/s
Dec  3 02:14:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec  3 02:14:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:53 compute-0 nova_compute[351485]: 2025-12-03 02:14:53.503 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:54 compute-0 nova_compute[351485]: 2025-12-03 02:14:54.896 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 37 op/s
Dec  3 02:14:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 32 op/s
Dec  3 02:14:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:14:58 compute-0 nova_compute[351485]: 2025-12-03 02:14:58.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:14:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Dec  3 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:14:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:14:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:14:59 compute-0 podman[158098]: time="2025-12-03T02:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:14:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8181 "" "Go-http-client/1.1"
Dec  3 02:14:59 compute-0 nova_compute[351485]: 2025-12-03 02:14:59.901 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 26 op/s
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:15:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:15:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 379 KiB/s wr, 4 op/s
Dec  3 02:15:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:03 compute-0 nova_compute[351485]: 2025-12-03 02:15:03.508 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:04 compute-0 nova_compute[351485]: 2025-12-03 02:15:04.905 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:06 compute-0 podman[442335]: 2025-12-03 02:15:06.869732586 +0000 UTC m=+0.102534282 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:15:06 compute-0 podman[442333]: 2025-12-03 02:15:06.878421132 +0000 UTC m=+0.125252755 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:15:06 compute-0 podman[442334]: 2025-12-03 02:15:06.883758443 +0000 UTC m=+0.123317170 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:15:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:08 compute-0 nova_compute[351485]: 2025-12-03 02:15:08.512 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:09 compute-0 ovn_controller[89134]: 2025-12-03T02:15:09Z|00065|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec  3 02:15:09 compute-0 nova_compute[351485]: 2025-12-03 02:15:09.909 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:15:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:15:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eb3b30b3-5a11-4eb4-90a2-ced56086bd49 does not exist
Dec  3 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d720c6ec-df24-43c4-92dc-cb06b3489e78 does not exist
Dec  3 02:15:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 86e65e3f-e748-41b4-8c40-e90827f8de9f does not exist
Dec  3 02:15:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:15:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:15:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:15:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:15:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:15:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:15:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.156438959 +0000 UTC m=+0.086707064 container create 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.122419957 +0000 UTC m=+0.052688102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:11 compute-0 systemd[1]: Started libpod-conmon-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope.
Dec  3 02:15:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.314132341 +0000 UTC m=+0.244400496 container init 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.331488382 +0000 UTC m=+0.261756487 container start 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.339277732 +0000 UTC m=+0.269545887 container attach 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:15:11 compute-0 jolly_darwin[442673]: 167 167
Dec  3 02:15:11 compute-0 systemd[1]: libpod-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope: Deactivated successfully.
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.346826776 +0000 UTC m=+0.277094871 container died 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb462b7c75f5a5a2b708adc3ba0e85e536539b042f7275e3e49268750b35df7-merged.mount: Deactivated successfully.
Dec  3 02:15:11 compute-0 podman[442656]: 2025-12-03 02:15:11.429489895 +0000 UTC m=+0.359757970 container remove 31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:15:11 compute-0 systemd[1]: libpod-conmon-31de91df52f3e3246c049274c515c908b43fb2d05e8eeb0da3e14c239cb46df1.scope: Deactivated successfully.
Dec  3 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.707310275 +0000 UTC m=+0.095895474 container create 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.671601685 +0000 UTC m=+0.060186934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:11 compute-0 systemd[1]: Started libpod-conmon-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope.
Dec  3 02:15:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.902520928 +0000 UTC m=+0.291106177 container init 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.924763717 +0000 UTC m=+0.313348916 container start 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:15:11 compute-0 podman[442697]: 2025-12-03 02:15:11.932697112 +0000 UTC m=+0.321282361 container attach 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:15:12 compute-0 nova_compute[351485]: 2025-12-03 02:15:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:12 compute-0 podman[442722]: 2025-12-03 02:15:12.872679537 +0000 UTC m=+0.113521083 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  3 02:15:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:13 compute-0 optimistic_hermann[442713]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:15:13 compute-0 optimistic_hermann[442713]: --> relative data size: 1.0
Dec  3 02:15:13 compute-0 optimistic_hermann[442713]: --> All data devices are unavailable
Dec  3 02:15:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:13 compute-0 systemd[1]: libpod-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Deactivated successfully.
Dec  3 02:15:13 compute-0 systemd[1]: libpod-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Consumed 1.323s CPU time.
Dec  3 02:15:13 compute-0 podman[442697]: 2025-12-03 02:15:13.300784739 +0000 UTC m=+1.689369978 container died 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c64650f39446626df1afdfffb8e2d9b33aad66a2db29c0d0d5f5ecf4222411-merged.mount: Deactivated successfully.
Dec  3 02:15:13 compute-0 podman[442697]: 2025-12-03 02:15:13.422970775 +0000 UTC m=+1.811555984 container remove 30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:15:13 compute-0 systemd[1]: libpod-conmon-30ba8593a54c5bdee132d221e3ccd7d7ca7a1875358f64bf586f92805217a3d1.scope: Deactivated successfully.
Dec  3 02:15:13 compute-0 nova_compute[351485]: 2025-12-03 02:15:13.514 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.607 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.609 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.653981504 +0000 UTC m=+0.096404448 container create 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.610318309 +0000 UTC m=+0.052741313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:14 compute-0 systemd[1]: Started libpod-conmon-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope.
Dec  3 02:15:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.786443482 +0000 UTC m=+0.228866406 container init 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.797910897 +0000 UTC m=+0.240333801 container start 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:15:14 compute-0 podman[442911]: 2025-12-03 02:15:14.802382563 +0000 UTC m=+0.244805487 container attach 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:15:14 compute-0 flamboyant_antonelli[442928]: 167 167
Dec  3 02:15:14 compute-0 systemd[1]: libpod-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope: Deactivated successfully.
Dec  3 02:15:14 compute-0 podman[442952]: 2025-12-03 02:15:14.8761291 +0000 UTC m=+0.041809514 container died 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:15:14 compute-0 nova_compute[351485]: 2025-12-03 02:15:14.913 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0e97045ddbd6e16e9fb1fe7e16e6366351a1e04be6f530c683b6bfd1390418-merged.mount: Deactivated successfully.
Dec  3 02:15:14 compute-0 podman[442952]: 2025-12-03 02:15:14.949220258 +0000 UTC m=+0.114900642 container remove 51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:15:14 compute-0 systemd[1]: libpod-conmon-51f0bc3bca80b8d8bf01900cf936e0aefbad70b92660e1d59aed31ff0d143ec9.scope: Deactivated successfully.
Dec  3 02:15:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1373845775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.154 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.213787873 +0000 UTC m=+0.087110185 container create 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.190263148 +0000 UTC m=+0.063585510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:15 compute-0 systemd[1]: Started libpod-conmon-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope.
Dec  3 02:15:15 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.432589364 +0000 UTC m=+0.305911736 container init 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.45685134 +0000 UTC m=+0.330173642 container start 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 02:15:15 compute-0 podman[442971]: 2025-12-03 02:15:15.461858812 +0000 UTC m=+0.335181224 container attach 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.694 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.697 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4106MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.698 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.699 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.794 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.795 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.814 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.854 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.854 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.875 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.903 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:15:15 compute-0 nova_compute[351485]: 2025-12-03 02:15:15.922 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]: {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    "0": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "devices": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "/dev/loop3"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            ],
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_name": "ceph_lv0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_size": "21470642176",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "name": "ceph_lv0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "tags": {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_name": "ceph",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.crush_device_class": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.encrypted": "0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_id": "0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.vdo": "0"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            },
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "vg_name": "ceph_vg0"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        }
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    ],
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    "1": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "devices": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "/dev/loop4"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            ],
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_name": "ceph_lv1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_size": "21470642176",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "name": "ceph_lv1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "tags": {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_name": "ceph",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.crush_device_class": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.encrypted": "0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_id": "1",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.vdo": "0"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            },
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "vg_name": "ceph_vg1"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        }
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    ],
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    "2": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "devices": [
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "/dev/loop5"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            ],
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_name": "ceph_lv2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_size": "21470642176",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "name": "ceph_lv2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "tags": {
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.cluster_name": "ceph",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.crush_device_class": "",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.encrypted": "0",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osd_id": "2",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:                "ceph.vdo": "0"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            },
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "type": "block",
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:            "vg_name": "ceph_vg2"
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:        }
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]:    ]
Dec  3 02:15:16 compute-0 infallible_dewdney[442988]: }
Dec  3 02:15:16 compute-0 systemd[1]: libpod-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope: Deactivated successfully.
Dec  3 02:15:16 compute-0 conmon[442988]: conmon 17d95e41e58a53e97481 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope/container/memory.events
Dec  3 02:15:16 compute-0 podman[442971]: 2025-12-03 02:15:16.315174475 +0000 UTC m=+1.188496817 container died 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7996450ab6a701ca71eb200519f84c35fa6c70d8d188108a234c0868ff6617-merged.mount: Deactivated successfully.
Dec  3 02:15:16 compute-0 podman[442971]: 2025-12-03 02:15:16.420238867 +0000 UTC m=+1.293561179 container remove 17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:15:16 compute-0 systemd[1]: libpod-conmon-17d95e41e58a53e974812b4fec5415d7dfa2a93f659800a54da3873213a277d2.scope: Deactivated successfully.
Dec  3 02:15:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460498832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.513 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.527 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.546 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.549 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.550 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:16 compute-0 nova_compute[351485]: 2025-12-03 02:15:16.910 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:17 compute-0 podman[443135]: 2025-12-03 02:15:17.165442761 +0000 UTC m=+0.108837311 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:15:17 compute-0 podman[443136]: 2025-12-03 02:15:17.170231906 +0000 UTC m=+0.111330821 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:15:17 compute-0 podman[443134]: 2025-12-03 02:15:17.17142585 +0000 UTC m=+0.116997161 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:15:17 compute-0 podman[443133]: 2025-12-03 02:15:17.174553778 +0000 UTC m=+0.132626393 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:15:17 compute-0 podman[443132]: 2025-12-03 02:15:17.220982012 +0000 UTC m=+0.184769739 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.551 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.551 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.552 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.564622395 +0000 UTC m=+0.104367114 container create e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.569 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.569 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:17 compute-0 nova_compute[351485]: 2025-12-03 02:15:17.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.51887813 +0000 UTC m=+0.058622849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:17 compute-0 systemd[1]: Started libpod-conmon-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope.
Dec  3 02:15:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.697446003 +0000 UTC m=+0.237190732 container init e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.715500913 +0000 UTC m=+0.255245582 container start e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 02:15:17 compute-0 podman[443271]: 2025-12-03 02:15:17.722607874 +0000 UTC m=+0.262352593 container attach e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:15:17 compute-0 confident_payne[443287]: 167 167
Dec  3 02:15:17 compute-0 systemd[1]: libpod-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope: Deactivated successfully.
Dec  3 02:15:17 compute-0 podman[443292]: 2025-12-03 02:15:17.812049625 +0000 UTC m=+0.058408543 container died e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f62d4038ddeaf9d7b68ed037258ce9350a3d2ea60aa226c4517b9c1dbfbbb7c-merged.mount: Deactivated successfully.
Dec  3 02:15:17 compute-0 podman[443292]: 2025-12-03 02:15:17.884832164 +0000 UTC m=+0.131191082 container remove e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:15:17 compute-0 systemd[1]: libpod-conmon-e8236ae066edc2cc51952c46c9d433606c1ed9e8d3858dd36785e7c8f83b3df6.scope: Deactivated successfully.
Dec  3 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.197803329 +0000 UTC m=+0.097580412 container create 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.164854517 +0000 UTC m=+0.064631640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:15:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:18 compute-0 systemd[1]: Started libpod-conmon-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope.
Dec  3 02:15:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.394265858 +0000 UTC m=+0.294042981 container init 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.410713333 +0000 UTC m=+0.310490416 container start 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:15:18 compute-0 podman[443315]: 2025-12-03 02:15:18.416411884 +0000 UTC m=+0.316188987 container attach 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:15:18 compute-0 nova_compute[351485]: 2025-12-03 02:15:18.517 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:19 compute-0 nova_compute[351485]: 2025-12-03 02:15:19.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.510 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.511 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.511 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.514 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.518 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.520 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.524 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:15:19.525 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]: {
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_id": 2,
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "type": "bluestore"
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    },
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_id": 1,
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "type": "bluestore"
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    },
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_id": 0,
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:        "type": "bluestore"
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]:    }
Dec  3 02:15:19 compute-0 quizzical_cartwright[443331]: }
Dec  3 02:15:19 compute-0 systemd[1]: libpod-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Deactivated successfully.
Dec  3 02:15:19 compute-0 systemd[1]: libpod-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Consumed 1.235s CPU time.
Dec  3 02:15:19 compute-0 podman[443315]: 2025-12-03 02:15:19.653633999 +0000 UTC m=+1.553411082 container died 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fcfdd4accaa775e9fcfaac982e7812b474e8176ba1ecec2269f6905ba4f8b16-merged.mount: Deactivated successfully.
Dec  3 02:15:19 compute-0 podman[443315]: 2025-12-03 02:15:19.744287284 +0000 UTC m=+1.644064327 container remove 99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:15:19 compute-0 systemd[1]: libpod-conmon-99d8777efbbec0b967606b31131e017dec904fbd465feed679bed380af34907e.scope: Deactivated successfully.
Dec  3 02:15:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:15:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:15:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fe2fbd03-9e6e-46e5-90a2-faceb23996fa does not exist
Dec  3 02:15:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6f2c6e30-cc47-4de5-8392-a6c78903b1f8 does not exist
Dec  3 02:15:19 compute-0 nova_compute[351485]: 2025-12-03 02:15:19.917 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:15:20 compute-0 nova_compute[351485]: 2025-12-03 02:15:20.590 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:20 compute-0 nova_compute[351485]: 2025-12-03 02:15:20.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:22 compute-0 nova_compute[351485]: 2025-12-03 02:15:22.795 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.123 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.500 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.519 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:23 compute-0 nova_compute[351485]: 2025-12-03 02:15:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:24 compute-0 nova_compute[351485]: 2025-12-03 02:15:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:24 compute-0 nova_compute[351485]: 2025-12-03 02:15:24.922 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:26 compute-0 nova_compute[351485]: 2025-12-03 02:15:26.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:15:28
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:15:28 compute-0 nova_compute[351485]: 2025-12-03 02:15:28.523 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:15:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.224 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.275 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:29 compute-0 podman[158098]: time="2025-12-03T02:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:15:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8168 "" "Go-http-client/1.1"
Dec  3 02:15:29 compute-0 nova_compute[351485]: 2025-12-03 02:15:29.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:15:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.413 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:15:32 compute-0 nova_compute[351485]: 2025-12-03 02:15:32.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:15:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:33 compute-0 nova_compute[351485]: 2025-12-03 02:15:33.526 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.335 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.513 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.514 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.548 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.714 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.715 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.732 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.733 351492 INFO nova.compute.claims [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.862 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:34 compute-0 nova_compute[351485]: 2025-12-03 02:15:34.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748561572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.376 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.390 351492 DEBUG nova.compute.provider_tree [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.415 351492 DEBUG nova.scheduler.client.report [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.474 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.476 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.540 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.541 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.566 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.590 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.722 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.725 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.726 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating image(s)
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.777 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.834 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.894 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.907 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.908 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:35 compute-0 nova_compute[351485]: 2025-12-03 02:15:35.916 351492 DEBUG nova.policy [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '08c7d81f1f9e4989b1eb8b8cf96bbf11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  3 02:15:36 compute-0 nova_compute[351485]: 2025-12-03 02:15:36.497 351492 DEBUG nova.virt.libvirt.imagebackend [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/ef773cba-72f0-486f-b5e5-792ff26bb688/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/ef773cba-72f0-486f-b5e5-792ff26bb688/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  3 02:15:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:37 compute-0 nova_compute[351485]: 2025-12-03 02:15:37.204 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Successfully created port: b7fa8023-e50c-4bea-be79-8fbe005f0b8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  3 02:15:37 compute-0 podman[443507]: 2025-12-03 02:15:37.877017771 +0000 UTC m=+0.115006605 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:15:37 compute-0 podman[443505]: 2025-12-03 02:15:37.877941177 +0000 UTC m=+0.135821814 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  3 02:15:37 compute-0 podman[443506]: 2025-12-03 02:15:37.901896295 +0000 UTC m=+0.146067944 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.024 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.125 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.127 351492 DEBUG nova.virt.images [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] ef773cba-72f0-486f-b5e5-792ff26bb688 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.130 351492 DEBUG nova.privsep.utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.131 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.388 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.part /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.396 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.439 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.440 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.476 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.496 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601.converted --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.498 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.538 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.550 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 4f50e501-f565-4e1f-aa02-df921702eff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.583 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.622 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.623 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.634 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.635 351492 INFO nova.compute.claims [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Claim successful on node compute-0.ctlplane.example.com
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:15:38 compute-0 nova_compute[351485]: 2025-12-03 02:15:38.810 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.029 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 4f50e501-f565-4e1f-aa02-df921702eff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.154 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] resizing rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  3 02:15:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810822016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.401 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.421 351492 DEBUG nova.objects.instance [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'migration_context' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.430 351492 DEBUG nova.compute.provider_tree [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.455 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.456 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Ensure instance console log exists: /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.456 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.457 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.457 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.459 351492 DEBUG nova.scheduler.client.report [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.492 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.493 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.545 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.546 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.567 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.591 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.691 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.694 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.695 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating image(s)#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.749 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.812 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.880 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.890 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.938 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.987 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.987 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.988 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:39 compute-0 nova_compute[351485]: 2025-12-03 02:15:39.989 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.033 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.042 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 07ce21e6-3627-467a-9b7e-d9045308576c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.156 351492 DEBUG nova.policy [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7f624afcf845f786397f8aa1bb2a63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.352 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.353 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.355 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.498 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 07ce21e6-3627-467a-9b7e-d9045308576c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.548 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:15:40 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:40.551 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.689 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] resizing rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.896 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Successfully updated port: b7fa8023-e50c-4bea-be79-8fbe005f0b8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.921 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.921 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:40 compute-0 nova_compute[351485]: 2025-12-03 02:15:40.922 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:15:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 30 op/s
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.120 351492 DEBUG nova.objects.instance [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'migration_context' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.139 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.139 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Ensure instance console log exists: /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.140 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.141 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.141 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:41 compute-0 nova_compute[351485]: 2025-12-03 02:15:41.637 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.916 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Successfully created port: 5009f27c-5ce3-46eb-b7aa-e82645a3097e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.971 351492 DEBUG nova.compute.manager [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.972 351492 DEBUG nova.compute.manager [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:15:42 compute-0 nova_compute[351485]: 2025-12-03 02:15:42.973 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 115 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec  3 02:15:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.407 351492 DEBUG nova.network.neutron [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.441 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.442 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance network_info: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.443 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.444 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.450 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start _get_guest_xml network_info=[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.464 351492 WARNING nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.479 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.481 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.488 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.489 351492 DEBUG nova.virt.libvirt.host [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.490 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.491 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.492 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.492 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.493 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.494 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.494 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.495 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.496 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.496 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.497 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.498 351492 DEBUG nova.virt.hardware [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.503 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.533 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.578 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.579 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.604 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.702 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.703 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.714 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.715 351492 INFO nova.compute.claims [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:15:43 compute-0 podman[443894]: 2025-12-03 02:15:43.886474544 +0000 UTC m=+0.136401230 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  3 02:15:43 compute-0 nova_compute[351485]: 2025-12-03 02:15:43.897 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835045495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.065 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.120 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.131 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.408 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Successfully updated port: 5009f27c-5ce3-46eb-b7aa-e82645a3097e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278663102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.435 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.436 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.436 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.447 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.472 351492 DEBUG nova.compute.provider_tree [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.492 351492 DEBUG nova.scheduler.client.report [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.521 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.521 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.576 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.577 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:15:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028958713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.601 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.623 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.640 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.643 351492 DEBUG nova.virt.libvirt.vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.644 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.646 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.649 351492 DEBUG nova.objects.instance [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.673 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <uuid>4f50e501-f565-4e1f-aa02-df921702eff9</uuid>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <name>instance-00000006</name>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1950125250</nova:name>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:15:43</nova:creationTime>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:user uuid="08c7d81f1f9e4989b1eb8b8cf96bbf11">tempest-AttachInterfacesUnderV243Test-1651825730-project-member</nova:user>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:project uuid="a9efdda7cf984595a9c5a855bae62b0e">tempest-AttachInterfacesUnderV243Test-1651825730</nova:project>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <nova:port uuid="b7fa8023-e50c-4bea-be79-8fbe005f0b8a">
Dec  3 02:15:44 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <system>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="serial">4f50e501-f565-4e1f-aa02-df921702eff9</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="uuid">4f50e501-f565-4e1f-aa02-df921702eff9</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </system>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <os>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </os>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <features>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </features>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/4f50e501-f565-4e1f-aa02-df921702eff9_disk">
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/4f50e501-f565-4e1f-aa02-df921702eff9_disk.config">
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:44 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:12:b3:fa"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <target dev="tapb7fa8023-e5"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/console.log" append="off"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <video>
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </video>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:15:44 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:15:44 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:15:44 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:15:44 compute-0 nova_compute[351485]: </domain>
Dec  3 02:15:44 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.675 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Preparing to wait for external event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.675 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.676 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.676 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.677 351492 DEBUG nova.virt.libvirt.vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.678 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.679 351492 DEBUG nova.network.os_vif_util [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.680 351492 DEBUG os_vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.682 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.683 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.693 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7fa8023-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.694 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7fa8023-e5, col_values=(('external_ids', {'iface-id': 'b7fa8023-e50c-4bea-be79-8fbe005f0b8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:b3:fa', 'vm-uuid': '4f50e501-f565-4e1f-aa02-df921702eff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.696 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:44 compute-0 NetworkManager[48912]: <info>  [1764728144.6985] manager: (tapb7fa8023-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.699 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.711 351492 INFO os_vif [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5')#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.775 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.777 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.777 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating image(s)#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.830 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.890 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.951 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:44 compute-0 nova_compute[351485]: 2025-12-03 02:15:44.972 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 123 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 36 op/s
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.014 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.031 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.032 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.033 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] No VIF found with MAC fa:16:3e:12:b3:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.034 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Using config drive#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.077 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.089 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.090 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.090 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.091 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.126 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.136 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 a48b4084-369d-432a-9f47-9378cdcc011f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.169 351492 DEBUG nova.policy [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '292dd1da4e67424b855327b32f0623b7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.549 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 a48b4084-369d-432a-9f47-9378cdcc011f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:45.555 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.724 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] resizing rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.867 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Creating config drive at /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config#033[00m
Dec  3 02:15:45 compute-0 nova_compute[351485]: 2025-12-03 02:15:45.881 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo0mbnonu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.010 351492 DEBUG nova.objects.instance [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'migration_context' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.030 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.030 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Ensure instance console log exists: /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.032 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.032 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.033 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.037 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo0mbnonu" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.088 351492 DEBUG nova.storage.rbd_utils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] rbd image 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.100 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.304 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.306 351492 DEBUG nova.network.neutron [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.330 351492 DEBUG oslo_concurrency.lockutils [req-e8ba8ab5-55a9-4b09-90de-02681036b5df req-456a9d1c-60e5-407d-9d4b-a1568c2e0216 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.367 351492 DEBUG oslo_concurrency.processutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config 4f50e501-f565-4e1f-aa02-df921702eff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.368 351492 INFO nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deleting local config drive /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9/disk.config because it was imported into RBD.#033[00m
Dec  3 02:15:46 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 02:15:46 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.453 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Successfully created port: ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:15:46 compute-0 kernel: tapb7fa8023-e5: entered promiscuous mode
Dec  3 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00066|binding|INFO|Claiming lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a for this chassis.
Dec  3 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00067|binding|INFO|b7fa8023-e50c-4bea-be79-8fbe005f0b8a: Claiming fa:16:3e:12:b3:fa 10.100.0.3
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.529 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.5345] manager: (tapb7fa8023-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.546 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b3:fa 10.100.0.3'], port_security=['fa:16:3e:12:b3:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4f50e501-f565-4e1f-aa02-df921702eff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '532f80d5-065d-43cb-9604-ad1c2a6e3902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=319776e3-1c91-4ec0-bfb2-2325dfaa1fa2, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=b7fa8023-e50c-4bea-be79-8fbe005f0b8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.549 288528 INFO neutron.agent.ovn.metadata.agent [-] Port b7fa8023-e50c-4bea-be79-8fbe005f0b8a in datapath a5e23dc0-bcc2-406c-bc7f-b978295be94b bound to our chassis#033[00m
Dec  3 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00068|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a ovn-installed in OVS
Dec  3 02:15:46 compute-0 ovn_controller[89134]: 2025-12-03T02:15:46Z|00069|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a up in Southbound
Dec  3 02:15:46 compute-0 nova_compute[351485]: 2025-12-03 02:15:46.554 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.554 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5e23dc0-bcc2-406c-bc7f-b978295be94b#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.573 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d35b86a2-1fb4-45e4-ad21-cf848666c3f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.574 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5e23dc0-b1 in ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.577 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5e23dc0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.577 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[94c29a11-82ae-411c-b460-0999f40c1303]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.578 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cef05539-2a64-4017-ba8b-a417433468e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 systemd-machined[138558]: New machine qemu-6-instance-00000006.
Dec  3 02:15:46 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.598 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[2c65b49c-c19c-4b55-a601-8215211c2392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.633 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ec71ec43-500c-406b-9783-eebc2f172322]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 systemd-udevd[444239]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.680 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4adf057c-1283-4b1b-9a83-216872fa8a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6848] device (tapb7fa8023-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6862] device (tapb7fa8023-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.6914] manager: (tapa5e23dc0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.690 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c54239-df22-4fdd-97fd-9138f71e7ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.737 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[72c8581f-5f3c-4f47-8013-0cb40681d284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.743 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2a2949-2e39-4ed1-a00e-ebb66dcba907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 NetworkManager[48912]: <info>  [1764728146.7734] device (tapa5e23dc0-b0): carrier: link connected
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.778 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4726dc-04b8-4ef0-b85a-59e2944531ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.804 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[38a989df-1d46-4f71-a65b-a88fa3989966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e23dc0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:e2:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698625, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444268, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.827 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fb417c2f-f7f4-4078-9923-4c582d39ba54]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:e260'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698625, 'tstamp': 698625}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 444269, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.860 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5ab2c4-798f-450c-95c6-532ad7957ca0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5e23dc0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:e2:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698625, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 444270, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:46.915 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbd1a6a-5dff-441f-8b94-99b7e97669aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec  3 02:15:47 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.014 351492 DEBUG nova.network.neutron [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.018 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c94dbc1-9c84-4b1b-9417-e1a910095e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.020 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e23dc0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.021 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.024 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.021 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5e23dc0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:47 compute-0 kernel: tapa5e23dc0-b0: entered promiscuous mode
Dec  3 02:15:47 compute-0 NetworkManager[48912]: <info>  [1764728147.0255] manager: (tapa5e23dc0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.031 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5e23dc0-b0, col_values=(('external_ids', {'iface-id': 'f4f388aa-0af5-4918-b8ad-5c74c22057c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:47 compute-0 ovn_controller[89134]: 2025-12-03T02:15:47Z|00070|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.052 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.052 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance network_info: |[{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.054 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start _get_guest_xml network_info=[{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.064 351492 WARNING nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.064 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.066 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0ac628-ca91-44c2-90db-82722b23cad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.067 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-a5e23dc0-bcc2-406c-bc7f-b978295be94b
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/a5e23dc0-bcc2-406c-bc7f-b978295be94b.pid.haproxy
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID a5e23dc0-bcc2-406c-bc7f-b978295be94b
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:15:47 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:47.068 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'env', 'PROCESS_TAG=haproxy-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5e23dc0-bcc2-406c-bc7f-b978295be94b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:15:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.071 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.072 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.077 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.077 351492 DEBUG nova.virt.libvirt.host [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.078 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.079 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.080 351492 DEBUG nova.virt.hardware [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.082 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640582546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.464 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728147.4639695, 4f50e501-f565-4e1f-aa02-df921702eff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.465 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Started (Lifecycle Event)#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.492 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.493 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.493 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.500 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728147.47037, 4f50e501-f565-4e1f-aa02-df921702eff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.501 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.516 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.520 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.529 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533004978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.628 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.632361847 +0000 UTC m=+0.099374873 container create 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.594409703 +0000 UTC m=+0.061422709 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:15:47 compute-0 systemd[1]: Started libpod-conmon-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope.
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.701 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0f5e97a1c9cf6a7b1ce8133ccb65b7a2748d41d5e4c00f49714ed27a9e8b68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.759 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.775038394 +0000 UTC m=+0.242051400 container init 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 02:15:47 compute-0 podman[444383]: 2025-12-03 02:15:47.782181226 +0000 UTC m=+0.249194202 container start 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.784 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:47 compute-0 podman[444417]: 2025-12-03 02:15:47.791431357 +0000 UTC m=+0.085651224 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:15:47 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : New worker (444494) forked
Dec  3 02:15:47 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : Loading success.
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.821 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.821 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.831 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:15:47 compute-0 nova_compute[351485]: 2025-12-03 02:15:47.831 351492 INFO nova.compute.claims [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:15:47 compute-0 podman[444418]: 2025-12-03 02:15:47.852122025 +0000 UTC m=+0.148427311 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Dec  3 02:15:47 compute-0 podman[444416]: 2025-12-03 02:15:47.852831585 +0000 UTC m=+0.140112316 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec  3 02:15:47 compute-0 podman[444411]: 2025-12-03 02:15:47.871486202 +0000 UTC m=+0.177915634 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:15:47 compute-0 podman[444420]: 2025-12-03 02:15:47.87494478 +0000 UTC m=+0.166998196 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125)
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.032 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.112 351492 DEBUG nova.compute.manager [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.114 351492 DEBUG nova.compute.manager [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing instance network info cache due to event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.115 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.115 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.116 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.174 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Successfully updated port: ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.198 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.200 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.201 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582215624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.285 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.291 351492 DEBUG nova.virt.libvirt.vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.293 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.296 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.299 351492 DEBUG nova.objects.instance [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.323 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <uuid>07ce21e6-3627-467a-9b7e-d9045308576c</uuid>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <name>instance-00000007</name>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:name>tempest-ServersTestJSON-server-1673813976</nova:name>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:15:47</nova:creationTime>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:user uuid="8a7f624afcf845f786397f8aa1bb2a63">tempest-ServersTestJSON-263993337-project-member</nova:user>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:project uuid="5a1cf3657daa4d798d912ceaae049aa0">tempest-ServersTestJSON-263993337</nova:project>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <nova:port uuid="5009f27c-5ce3-46eb-b7aa-e82645a3097e">
Dec  3 02:15:48 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <system>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="serial">07ce21e6-3627-467a-9b7e-d9045308576c</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="uuid">07ce21e6-3627-467a-9b7e-d9045308576c</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </system>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <os>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </os>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <features>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </features>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/07ce21e6-3627-467a-9b7e-d9045308576c_disk">
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/07ce21e6-3627-467a-9b7e-d9045308576c_disk.config">
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:48 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:3a:ad:09"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <target dev="tap5009f27c-5c"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/console.log" append="off"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <video>
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </video>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:15:48 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:15:48 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:15:48 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:15:48 compute-0 nova_compute[351485]: </domain>
Dec  3 02:15:48 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.339 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Preparing to wait for external event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.339 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.340 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.340 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.341 351492 DEBUG nova.virt.libvirt.vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.341 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.343 351492 DEBUG nova.network.os_vif_util [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.343 351492 DEBUG os_vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.344 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.345 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.350 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.350 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5009f27c-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.351 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5009f27c-5c, col_values=(('external_ids', {'iface-id': '5009f27c-5ce3-46eb-b7aa-e82645a3097e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:ad:09', 'vm-uuid': '07ce21e6-3627-467a-9b7e-d9045308576c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:15:48 compute-0 NetworkManager[48912]: <info>  [1764728148.3551] manager: (tap5009f27c-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.357 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.370 351492 INFO os_vif [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c')
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.435 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.438 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.439 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] No VIF found with MAC fa:16:3e:3a:ad:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.440 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Using config drive
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.481 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.494 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:15:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482587053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.616 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.629 351492 DEBUG nova.compute.provider_tree [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.656 351492 DEBUG nova.scheduler.client.report [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.685 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.686 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.740 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.741 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.760 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.780 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.885 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.888 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.889 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating image(s)
Dec  3 02:15:48 compute-0 nova_compute[351485]: 2025-12-03 02:15:48.940 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 165 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 65 op/s
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.003 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.061 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.075 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.107 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Creating config drive at /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.117 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xwj8d11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.151 351492 DEBUG nova.policy [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4dc5f09973d5430fb9d8106a1a0a2479', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5875dd9a17274c38a2ae81fb3759558e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.160 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.161 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.162 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.162 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.206 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.213 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 5c870f25-6c33-4e95-b540-5a806454f556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.270 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xwj8d11" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.319 351492 DEBUG nova.storage.rbd_utils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] rbd image 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.345 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.656 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 5c870f25-6c33-4e95-b540-5a806454f556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.704 351492 DEBUG oslo_concurrency.processutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config 07ce21e6-3627-467a-9b7e-d9045308576c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.704 351492 INFO nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deleting local config drive /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c/disk.config because it was imported into RBD.
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.772 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] resizing rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  3 02:15:49 compute-0 kernel: tap5009f27c-5c: entered promiscuous mode
Dec  3 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.7963] manager: (tap5009f27c-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec  3 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00071|binding|INFO|Claiming lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e for this chassis.
Dec  3 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00072|binding|INFO|5009f27c-5ce3-46eb-b7aa-e82645a3097e: Claiming fa:16:3e:3a:ad:09 10.100.0.10
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.817 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:ad:09 10.100.0.10'], port_security=['fa:16:3e:3a:ad:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07ce21e6-3627-467a-9b7e-d9045308576c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3e8f04e-3c5d-406e-b48c-aa69bd7ba1c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427d4c89-de71-4fff-872a-bb6406d77b1e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=5009f27c-5ce3-46eb-b7aa-e82645a3097e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.820 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 5009f27c-5ce3-46eb-b7aa-e82645a3097e in datapath 9f9dd264-e73a-4200-ba74-0833c40bd14c bound to our chassis
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.823 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec  3 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00073|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e ovn-installed in OVS
Dec  3 02:15:49 compute-0 ovn_controller[89134]: 2025-12-03T02:15:49Z|00074|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e up in Southbound
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.838 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2ceca406-2550-40a1-81b6-329da961146d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.840 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f9dd264-e1 in ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.841 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f9dd264-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.842 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cc16ac5e-1226-46d2-8e52-fae3c929f2b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.844 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[410cc115-4f5f-4290-b348-3e872202a046]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 systemd-udevd[444800]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:49 compute-0 systemd-machined[138558]: New machine qemu-7-instance-00000007.
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.867 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[cca22ffe-0c30-48bb-b09e-6e07e4e9164c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.8719] device (tap5009f27c-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:15:49 compute-0 nova_compute[351485]: 2025-12-03 02:15:49.872 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:15:49 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  3 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.8757] device (tap5009f27c-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.908 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4b681095-2aa2-42ba-95fb-2b1a98d82650]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.953 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c886956f-ea74-4412-9cec-030e7b3ae07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:49.965 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5475c0-6eea-4d04-a577-9fbf48844440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:49 compute-0 NetworkManager[48912]: <info>  [1764728149.9676] manager: (tap9f9dd264-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.017 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0cdcbaf1-87c9-4429-83f8-bb2b13111be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.021 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb38c84-d41a-47ea-806c-dc162687857b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.023 351492 DEBUG nova.objects.instance [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'migration_context' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.043 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.044 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Ensure instance console log exists: /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.045 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.045 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.046 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:15:50 compute-0 NetworkManager[48912]: <info>  [1764728150.0602] device (tap9f9dd264-e0): carrier: link connected
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.071 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[56d46a39-133b-4293-b407-da611f968970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.092 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2c749674-9692-4f32-8d67-ddaa45c102a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f9dd264-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:07:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698953, 'reachable_time': 24982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444850, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.130 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[51a51c17-7d2c-42ad-adc3-945e83757cee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698953, 'tstamp': 698953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 444851, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.157 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aec407-61ce-4647-a085-b3d93d68509b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f9dd264-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:07:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698953, 'reachable_time': 24982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 444852, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.202 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d63287b9-ea12-4711-ad1b-ef3e3351666c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.293 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c436adbb-4276-4784-afba-e6f82ebba8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.296 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f9dd264-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.303 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.304 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f9dd264-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.308 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:50 compute-0 NetworkManager[48912]: <info>  [1764728150.3099] manager: (tap9f9dd264-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  3 02:15:50 compute-0 kernel: tap9f9dd264-e0: entered promiscuous mode
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.318 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.319 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f9dd264-e0, col_values=(('external_ids', {'iface-id': '450cbc12-7d6b-43b0-b43f-cc78dcc16b25'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.321 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:50 compute-0 ovn_controller[89134]: 2025-12-03T02:15:50Z|00075|binding|INFO|Releasing lport 450cbc12-7d6b-43b0-b43f-cc78dcc16b25 from this chassis (sb_readonly=0)
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.351 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.353 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[58866e01-d737-4a02-a364-85c93a4aa8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.354 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/9f9dd264-e73a-4200-ba74-0833c40bd14c.pid.haproxy
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID 9f9dd264-e73a-4200-ba74-0833c40bd14c
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.355 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'env', 'PROCESS_TAG=haproxy-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f9dd264-e73a-4200-ba74-0833c40bd14c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:15:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:50.370 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.467 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728150.4670317, 07ce21e6-3627-467a-9b7e-d9045308576c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.468 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Started (Lifecycle Event)#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.487 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.495 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728150.4672446, 07ce21e6-3627-467a-9b7e-d9045308576c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.513 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.519 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:50 compute-0 nova_compute[351485]: 2025-12-03 02:15:50.543 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:50 compute-0 podman[444926]: 2025-12-03 02:15:50.944768194 +0000 UTC m=+0.112119263 container create 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 02:15:50 compute-0 podman[444926]: 2025-12-03 02:15:50.898429243 +0000 UTC m=+0.065780352 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:15:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 218 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 120 op/s
Dec  3 02:15:51 compute-0 systemd[1]: Started libpod-conmon-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope.
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.013 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Successfully created port: d7b1b965-f304-40eb-9f34-c63af54da9f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.020 351492 DEBUG nova.network.neutron [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.023 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updated VIF entry in instance network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.024 351492 DEBUG nova.network.neutron [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.069 351492 DEBUG oslo_concurrency.lockutils [req-c67eaf89-92dc-4efa-961a-930a221183f1 req-b62ac1f8-ed05-4d21-ae4c-f71e09e76aee 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.070 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.071 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance network_info: |[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c2115dbbdd79e6878ea3d1b5fd20b2e30c3ab979ab90b0f907915a9dad459d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.076 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start _get_guest_xml network_info=[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.088 351492 WARNING nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.099 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.100 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:15:51 compute-0 podman[444926]: 2025-12-03 02:15:51.109182496 +0000 UTC m=+0.276533595 container init 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.108 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.109 351492 DEBUG nova.virt.libvirt.host [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.112 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.112 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.113 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.114 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.114 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.115 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.116 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.116 351492 DEBUG nova.virt.hardware [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.121 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:51 compute-0 podman[444926]: 2025-12-03 02:15:51.124485419 +0000 UTC m=+0.291836478 container start 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:15:51 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : New worker (444946) forked
Dec  3 02:15:51 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : Loading success.
Dec  3 02:15:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45174843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.616 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.654 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:51 compute-0 nova_compute[351485]: 2025-12-03 02:15:51.662 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:52 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965893036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.202 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.203 351492 DEBUG nova.virt.libvirt.vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.204 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.205 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.206 351492 DEBUG nova.objects.instance [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.229 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <uuid>a48b4084-369d-432a-9f47-9378cdcc011f</uuid>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <name>instance-00000008</name>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:name>tempest-ServerActionsTestJSON-server-925455337</nova:name>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:15:51</nova:creationTime>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:user uuid="292dd1da4e67424b855327b32f0623b7">tempest-ServerActionsTestJSON-225723275-project-member</nova:user>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:project uuid="b95bb4c57d3543acb25997bedee9dec3">tempest-ServerActionsTestJSON-225723275</nova:project>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <nova:port uuid="ee5c2dfc-04c3-400a-8073-6f2c65dcea03">
Dec  3 02:15:52 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <system>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="serial">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="uuid">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </system>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <os>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </os>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <features>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </features>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk">
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk.config">
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:52 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:ff:dd:2f"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <target dev="tapee5c2dfc-04"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log" append="off"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <video>
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </video>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:15:52 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:15:52 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:15:52 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:15:52 compute-0 nova_compute[351485]: </domain>
Dec  3 02:15:52 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.230 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Preparing to wait for external event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.230 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.231 351492 DEBUG nova.virt.libvirt.vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.232 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.232 351492 DEBUG nova.network.os_vif_util [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.233 351492 DEBUG os_vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.235 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.235 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.241 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee5c2dfc-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.242 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee5c2dfc-04, col_values=(('external_ids', {'iface-id': 'ee5c2dfc-04c3-400a-8073-6f2c65dcea03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:dd:2f', 'vm-uuid': 'a48b4084-369d-432a-9f47-9378cdcc011f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:52 compute-0 NetworkManager[48912]: <info>  [1764728152.2447] manager: (tapee5c2dfc-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.246 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.258 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.258 351492 INFO os_vif [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.332 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.333 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.334 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] No VIF found with MAC fa:16:3e:ff:dd:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.335 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Using config drive#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.386 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.544 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.544 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.545 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Processing event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.546 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.547 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.548 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.548 351492 WARNING nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received unexpected event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with vm_state building and task_state spawning.#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.549 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.549 351492 DEBUG nova.compute.manager [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing instance network info cache due to event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.550 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.550 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.551 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.554 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.563 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728152.5622337, 4f50e501-f565-4e1f-aa02-df921702eff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.564 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.566 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.587 351492 INFO nova.virt.libvirt.driver [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance spawned successfully.#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.587 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.594 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.601 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.612 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.612 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.613 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.614 351492 DEBUG nova.virt.libvirt.driver [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.624 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.675 351492 INFO nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 16.95 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.676 351492 DEBUG nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.745 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Creating config drive at /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.753 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9acbjlf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.796 351492 INFO nova.compute.manager [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 18.13 seconds to build instance.#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.823 351492 DEBUG oslo_concurrency.lockutils [None req-9aa39c8c-10a6-4cbe-925b-8653e3043137 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.898 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg9acbjlf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.955 351492 DEBUG nova.storage.rbd_utils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] rbd image a48b4084-369d-432a-9f47-9378cdcc011f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.962 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config a48b4084-369d-432a-9f47-9378cdcc011f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:52 compute-0 nova_compute[351485]: 2025-12-03 02:15:52.988 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Successfully updated port: d7b1b965-f304-40eb-9f34-c63af54da9f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:15:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 234 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 746 KiB/s rd, 5.8 MiB/s wr, 98 op/s
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.010 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.011 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.011 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.188 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.243 351492 DEBUG oslo_concurrency.processutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config a48b4084-369d-432a-9f47-9378cdcc011f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.244 351492 INFO nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deleting local config drive /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/disk.config because it was imported into RBD.#033[00m
Dec  3 02:15:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:53 compute-0 kernel: tapee5c2dfc-04: entered promiscuous mode
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.3376] manager: (tapee5c2dfc-04): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec  3 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00076|binding|INFO|Claiming lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for this chassis.
Dec  3 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00077|binding|INFO|ee5c2dfc-04c3-400a-8073-6f2c65dcea03: Claiming fa:16:3e:ff:dd:2f 10.100.0.9
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.340 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.353 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.355 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a bound to our chassis#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.359 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fdf214a-0f6e-4e5d-b449-e1988827937a#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.374 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[db8df650-f2cf-4bd0-9b3b-65e4b4c3dea0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.375 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fdf214a-01 in ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.379 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fdf214a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.380 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc9de2-4cd9-4473-bbe1-9f20abb3f43a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.381 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[776c8ffc-89e0-4816-a48b-481f1c781dc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00078|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 up in Southbound
Dec  3 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00079|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 ovn-installed in OVS
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.387 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.394 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.405 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a0aa98-960a-4f7b-bbf6-0863e44af025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 systemd-machined[138558]: New machine qemu-8-instance-00000008.
Dec  3 02:15:53 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.432 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[129dc59c-fd10-410f-b62c-1654f4654c31]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 systemd-udevd[445093]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4556] device (tapee5c2dfc-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4593] device (tapee5c2dfc-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.484 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[07713f59-3f22-4558-9a4d-12cd3b377d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 systemd-udevd[445097]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.4940] manager: (tap2fdf214a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.496 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f5131873-4b44-4942-a6ee-b39705ba4d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.541 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[8a01c723-7698-484c-91a8-526fececb319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.557 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[3be02d87-c169-4c6f-97db-21f64352bd77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.5859] device (tap2fdf214a-00): carrier: link connected
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.594 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[746eb6fa-2d2a-45e9-9de5-52afb70257ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.613 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[93bb3556-130b-409e-93c6-fa446b5524c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699306, 'reachable_time': 26989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445123, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.635 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c099eb5-342f-4fc5-ac35-d77556a4b53d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:62d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699306, 'tstamp': 699306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445124, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.655 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf73ea-6e58-4683-be59-d4a85b42c8ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699306, 'reachable_time': 26989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445125, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.704 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[21c88b52-587b-4de7-aab3-6d3719d6f322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.796 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c64fe696-67e0-4b97-845f-e01e30e30ae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.797 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.797 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.798 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fdf214a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:53 compute-0 NetworkManager[48912]: <info>  [1764728153.8004] manager: (tap2fdf214a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.800 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 kernel: tap2fdf214a-00: entered promiscuous mode
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.805 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_controller[89134]: 2025-12-03T02:15:53Z|00080|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.804 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fdf214a-00, col_values=(('external_ids', {'iface-id': 'c8314dfe-5b76-4819-9b3e-1cb76a272253'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.827 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:15:53 compute-0 nova_compute[351485]: 2025-12-03 02:15:53.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.831 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3c93b3-4359-4add-8bed-571fb440b6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.831 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:15:53 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:53.832 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'env', 'PROCESS_TAG=haproxy-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fdf214a-0f6e-4e5d-b449-e1988827937a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.250 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728154.2493758, a48b4084-369d-432a-9f47-9378cdcc011f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.251 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Started (Lifecycle Event)#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.274 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.280 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728154.2498405, a48b4084-369d-432a-9f47-9378cdcc011f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.281 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.303 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.311 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:54 compute-0 nova_compute[351485]: 2025-12-03 02:15:54.331 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.371368343 +0000 UTC m=+0.069092736 container create a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.339917663 +0000 UTC m=+0.037642076 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:15:54 compute-0 systemd[1]: Started libpod-conmon-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope.
Dec  3 02:15:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087efa0144787524a70b8446fc5a09fbd51303045924a94f4a2b128c2b8cbdbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.523676842 +0000 UTC m=+0.221401265 container init a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 02:15:54 compute-0 podman[445197]: 2025-12-03 02:15:54.54091817 +0000 UTC m=+0.238642593 container start a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 02:15:54 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : New worker (445218) forked
Dec  3 02:15:54 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : Loading success.
Dec  3 02:15:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 242 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.381 351492 DEBUG nova.compute.manager [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.382 351492 DEBUG nova.compute.manager [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing instance network info cache due to event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.382 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.499 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated VIF entry in instance network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.500 351492 DEBUG nova.network.neutron [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.519 351492 DEBUG oslo_concurrency.lockutils [req-926d41f0-e9e6-497d-b230-713def277069 req-e6ffc61d-f569-443d-9c55-085850e13b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.704 351492 DEBUG nova.network.neutron [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.738 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.738 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance network_info: |[{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.739 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.739 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.746 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start _get_guest_xml network_info=[{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.761 351492 WARNING nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.777 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.779 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.786 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.786 351492 DEBUG nova.virt.libvirt.host [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.787 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.788 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.789 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.789 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.790 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.790 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.791 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.792 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.793 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.793 351492 DEBUG nova.virt.hardware [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:15:56 compute-0 nova_compute[351485]: 2025-12-03 02:15:56.799 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 973 KiB/s rd, 4.5 MiB/s wr, 135 op/s
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.185 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.187 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.188 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.188 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.189 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Processing event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.189 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.190 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.190 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.191 351492 DEBUG oslo_concurrency.lockutils [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.191 351492 DEBUG nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.192 351492 WARNING nova.compute.manager [req-a844ea0a-12cc-4eda-8959-94113d5ecc62 req-15dff412-6a6f-4f4f-bba4-4ea3bb817276 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received unexpected event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with vm_state building and task_state spawning.#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.194 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.216 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728157.2033532, 07ce21e6-3627-467a-9b7e-d9045308576c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.217 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.230 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.264 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.267 351492 INFO nova.virt.libvirt.driver [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance spawned successfully.#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.268 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.278 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.306 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.317 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.318 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.320 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.321 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843984994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.323 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.324 351492 DEBUG nova.virt.libvirt.driver [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.351 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.384 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.392 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.452 351492 INFO nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 17.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.453 351492 DEBUG nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.514 351492 INFO nova.compute.manager [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 18.92 seconds to build instance.#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.526 351492 DEBUG oslo_concurrency.lockutils [None req-f6afe543-8719-4d12-9fcb-f0756ad42295 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:15:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966741818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.903 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.905 351492 DEBUG nova.virt.libvirt.vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.905 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.906 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.908 351492 DEBUG nova.objects.instance [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.928 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <uuid>5c870f25-6c33-4e95-b540-5a806454f556</uuid>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <name>instance-00000009</name>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:name>tempest-ServersTestManualDisk-server-1318824371</nova:name>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:15:56</nova:creationTime>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:user uuid="4dc5f09973d5430fb9d8106a1a0a2479">tempest-ServersTestManualDisk-632797169-project-member</nova:user>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:project uuid="5875dd9a17274c38a2ae81fb3759558e">tempest-ServersTestManualDisk-632797169</nova:project>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <nova:port uuid="d7b1b965-f304-40eb-9f34-c63af54da9f4">
Dec  3 02:15:57 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <system>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="serial">5c870f25-6c33-4e95-b540-5a806454f556</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="uuid">5c870f25-6c33-4e95-b540-5a806454f556</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </system>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <os>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </os>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <features>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </features>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/5c870f25-6c33-4e95-b540-5a806454f556_disk">
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/5c870f25-6c33-4e95-b540-5a806454f556_disk.config">
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </source>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:15:57 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:57:b1:4a"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <target dev="tapd7b1b965-f3"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/console.log" append="off"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <video>
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </video>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:15:57 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:15:57 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:15:57 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:15:57 compute-0 nova_compute[351485]: </domain>
Dec  3 02:15:57 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.929 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Preparing to wait for external event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.929 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.937 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.938 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.938 351492 DEBUG nova.virt.libvirt.vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:15:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG nova.network.os_vif_util [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.939 351492 DEBUG os_vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.940 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.940 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.941 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.946 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.946 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7b1b965-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.948 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7b1b965-f3, col_values=(('external_ids', {'iface-id': 'd7b1b965-f304-40eb-9f34-c63af54da9f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:b1:4a', 'vm-uuid': '5c870f25-6c33-4e95-b540-5a806454f556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:57 compute-0 NetworkManager[48912]: <info>  [1764728157.9536] manager: (tapd7b1b965-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.956 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:57 compute-0 nova_compute[351485]: 2025-12-03 02:15:57.963 351492 INFO os_vif [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3')#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.042 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.045 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.047 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] No VIF found with MAC fa:16:3e:57:b1:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.049 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Using config drive#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.113 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:15:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.961 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Creating config drive at /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config#033[00m
Dec  3 02:15:58 compute-0 nova_compute[351485]: 2025-12-03 02:15:58.977 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpryjnql8w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 954 KiB/s rd, 2.5 MiB/s wr, 107 op/s
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.015 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updated VIF entry in instance network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.018 351492 DEBUG nova.network.neutron [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.059 351492 DEBUG oslo_concurrency.lockutils [req-97117c39-91ae-44e4-8a6d-841fe7460c05 req-b3afc542-40e1-4692-98bd-3e3ebf2fb43a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.137 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpryjnql8w" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.204 351492 DEBUG nova.storage.rbd_utils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] rbd image 5c870f25-6c33-4e95-b540-5a806454f556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.216 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config 5c870f25-6c33-4e95-b540-5a806454f556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.410 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.413 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.414 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.415 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.416 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Processing event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.416 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.417 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.418 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.418 351492 DEBUG oslo_concurrency.lockutils [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.419 351492 DEBUG nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.420 351492 WARNING nova.compute.manager [req-1e05a413-a054-4686-9154-f9eb71480fa4 req-7f0a23b3-97f3-40df-aa35-b6daaed12592 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state building and task_state spawning.#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.427 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.432 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728159.4323306, a48b4084-369d-432a-9f47-9378cdcc011f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.435 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.441 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.447 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance spawned successfully.#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.447 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.464 351492 DEBUG oslo_concurrency.processutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config 5c870f25-6c33-4e95-b540-5a806454f556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.464 351492 INFO nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deleting local config drive /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556/disk.config because it was imported into RBD.#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.472 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.489 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.497 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.498 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.499 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.499 351492 DEBUG nova.virt.libvirt.driver [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:15:59 compute-0 kernel: tapd7b1b965-f3: entered promiscuous mode
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5274] manager: (tapd7b1b965-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  3 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00081|binding|INFO|Claiming lport d7b1b965-f304-40eb-9f34-c63af54da9f4 for this chassis.
Dec  3 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00082|binding|INFO|d7b1b965-f304-40eb-9f34-c63af54da9f4: Claiming fa:16:3e:57:b1:4a 10.100.0.3
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.531 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.535 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.540 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:b1:4a 10.100.0.3'], port_security=['fa:16:3e:57:b1:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c870f25-6c33-4e95-b540-5a806454f556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5875dd9a17274c38a2ae81fb3759558e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '286ce87f-1fc2-4f0d-bf8b-2c43a617c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6691e56-1a9f-42fd-b8af-9a3ce340219b, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d7b1b965-f304-40eb-9f34-c63af54da9f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.542 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d7b1b965-f304-40eb-9f34-c63af54da9f4 in datapath e0e44891-e46c-41a0-a083-a444c0d34e1c bound to our chassis#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.544 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0e44891-e46c-41a0-a083-a444c0d34e1c#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.560 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bd18e317-d5cd-41d8-a71e-b37fe49abd8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.561 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0e44891-e1 in ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.563 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0e44891-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.563 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[45e67f50-3be6-4e35-a5ff-742d44d17b8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 systemd-udevd[445363]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.565 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3f630432-2c30-4c18-bcd3-c8f7476bcc12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00083|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 ovn-installed in OVS
Dec  3 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00084|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 up in Southbound
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.576 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5784] device (tapd7b1b965-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.5848] device (tapd7b1b965-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.589 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[6b92c699-97cc-4efb-ab31-a366ed4d5e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 systemd-machined[138558]: New machine qemu-9-instance-00000009.
Dec  3 02:15:59 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.607 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d3225e89-6ef2-4066-90b3-bde95b2f68c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.615 351492 INFO nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 14.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.616 351492 DEBUG nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.644 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6256d7-54d1-4406-87b3-cbdca7f1b98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.647 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.659 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[05f61d24-6b23-447d-974c-f49a010638cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.6613] manager: (tape0e44891-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  3 02:15:59 compute-0 systemd-udevd[445367]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.697 351492 INFO nova.compute.manager [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 16.03 seconds to build instance.#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.696 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0ede713e-8ebf-4496-a4fd-6751c79ff0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.700 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[60634159-b2fd-4d22-972c-2ff38f4e9b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.7228] device (tape0e44891-e0): carrier: link connected
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.724 351492 DEBUG oslo_concurrency.lockutils [None req-7f403891-6a97-4ed3-83e5-ea039cf88bf5 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.731 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0fb949-90ac-43f1-9d7b-2402b5cd6e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 podman[158098]: time="2025-12-03T02:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.749 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[679625d6-6cd2-46f4-b23c-eafdcc37ebe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0e44891-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:3e:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699920, 'reachable_time': 28264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445398, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46278 "" "Go-http-client/1.1"
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.769 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[055b0be9-5b55-486c-b692-192322aeb779]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:3ef8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699920, 'tstamp': 699920}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445399, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9569 "" "Go-http-client/1.1"
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.789 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[afe4d630-1954-4166-b6c5-adacf80f0dea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0e44891-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:3e:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699920, 'reachable_time': 28264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445400, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.844 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e60ed30b-9c79-4e95-a44d-50849059ac2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.933 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3094a525-413a-404d-829a-7f60209d56a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.934 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0e44891-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.934 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.935 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0e44891-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:59 compute-0 kernel: tape0e44891-e0: entered promiscuous mode
Dec  3 02:15:59 compute-0 NetworkManager[48912]: <info>  [1764728159.9392] manager: (tape0e44891-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.945 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.947 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0e44891-e0, col_values=(('external_ids', {'iface-id': 'c4f9e2ab-5c50-4335-91f7-b4ae67182674'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:15:59 compute-0 ovn_controller[89134]: 2025-12-03T02:15:59Z|00085|binding|INFO|Releasing lport c4f9e2ab-5c50-4335-91f7-b4ae67182674 from this chassis (sb_readonly=0)
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.953 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.954 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ef25275c-65f2-4492-992d-3482eae26fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.955 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-e0e44891-e46c-41a0-a083-a444c0d34e1c
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/e0e44891-e46c-41a0-a083-a444c0d34e1c.pid.haproxy
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID e0e44891-e46c-41a0-a083-a444c0d34e1c
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:15:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:15:59.956 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'env', 'PROCESS_TAG=haproxy-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0e44891-e46c-41a0-a083-a444c0d34e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:15:59 compute-0 nova_compute[351485]: 2025-12-03 02:15:59.963 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.259 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728160.258395, 5c870f25-6c33-4e95-b540-5a806454f556 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.259 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Started (Lifecycle Event)#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.283 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.290 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728160.2634172, 5c870f25-6c33-4e95-b540-5a806454f556 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.290 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.319 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.324 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:16:00 compute-0 nova_compute[351485]: 2025-12-03 02:16:00.345 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.453833723 +0000 UTC m=+0.090165162 container create 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  3 02:16:00 compute-0 systemd[1]: Started libpod-conmon-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope.
Dec  3 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.411602088 +0000 UTC m=+0.047933547 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:16:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5295931454d2be4766609de4f9590642eff52873c1c45af103b232bf8f6acedc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.559893653 +0000 UTC m=+0.196225102 container init 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:16:00 compute-0 podman[445471]: 2025-12-03 02:16:00.56826858 +0000 UTC m=+0.204600009 container start 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  3 02:16:00 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : New worker (445492) forked
Dec  3 02:16:00 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : Loading success.
Dec  3 02:16:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:16:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00086|binding|INFO|Releasing lport 450cbc12-7d6b-43b0-b43f-cc78dcc16b25 from this chassis (sb_readonly=0)
Dec  3 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00087|binding|INFO|Releasing lport c4f9e2ab-5c50-4335-91f7-b4ae67182674 from this chassis (sb_readonly=0)
Dec  3 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00088|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:16:01 compute-0 ovn_controller[89134]: 2025-12-03T02:16:01Z|00089|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:16:01 compute-0 nova_compute[351485]: 2025-12-03 02:16:01.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.555 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.556 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:02 compute-0 nova_compute[351485]: 2025-12-03 02:16:02.952 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 929 KiB/s wr, 149 op/s
Dec  3 02:16:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:03 compute-0 nova_compute[351485]: 2025-12-03 02:16:03.548 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 245 KiB/s wr, 154 op/s
Dec  3 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG nova.compute.manager [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG nova.compute.manager [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing instance network info cache due to event network-changed-5009f27c-5ce3-46eb-b7aa-e82645a3097e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.348 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:05 compute-0 nova_compute[351485]: 2025-12-03 02:16:05.349 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Refreshing network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.373 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.374 351492 DEBUG nova.network.neutron [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.403 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.404 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Processing event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.405 351492 DEBUG oslo_concurrency.lockutils [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 DEBUG nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 WARNING nova.compute.manager [req-167665d1-3bf5-4700-a874-dfa27ebcdbc4 req-1e24f9ba-9734-4e11-a627-49bad6522236 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received unexpected event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with vm_state building and task_state spawning.#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.406 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.415 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.417 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728166.4138896, 5c870f25-6c33-4e95-b540-5a806454f556 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.418 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.428 351492 INFO nova.virt.libvirt.driver [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance spawned successfully.#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.428 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.443 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.468 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.482 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.485 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.494 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.503 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.504 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.506 351492 DEBUG nova.virt.libvirt.driver [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.516 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.577 351492 INFO nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 17.69 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.577 351492 DEBUG nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.668 351492 INFO nova.compute.manager [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 18.88 seconds to build instance.#033[00m
Dec  3 02:16:06 compute-0 nova_compute[351485]: 2025-12-03 02:16:06.689 351492 DEBUG oslo_concurrency.lockutils [None req-5759bfe2-1f84-49df-a0f3-42e73b2e1800 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 29 KiB/s wr, 204 op/s
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.343 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.344 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.345 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.346 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.346 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.350 351492 INFO nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Terminating instance#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.354 351492 DEBUG nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:16:07 compute-0 kernel: tap5009f27c-5c (unregistering): left promiscuous mode
Dec  3 02:16:07 compute-0 NetworkManager[48912]: <info>  [1764728167.4446] device (tap5009f27c-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00090|binding|INFO|Releasing lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e from this chassis (sb_readonly=0)
Dec  3 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00091|binding|INFO|Setting lport 5009f27c-5ce3-46eb-b7aa-e82645a3097e down in Southbound
Dec  3 02:16:07 compute-0 ovn_controller[89134]: 2025-12-03T02:16:07Z|00092|binding|INFO|Removing iface tap5009f27c-5c ovn-installed in OVS
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.469 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:ad:09 10.100.0.10'], port_security=['fa:16:3e:3a:ad:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07ce21e6-3627-467a-9b7e-d9045308576c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a1cf3657daa4d798d912ceaae049aa0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3e8f04e-3c5d-406e-b48c-aa69bd7ba1c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=427d4c89-de71-4fff-872a-bb6406d77b1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=5009f27c-5ce3-46eb-b7aa-e82645a3097e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.471 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 5009f27c-5ce3-46eb-b7aa-e82645a3097e in datapath 9f9dd264-e73a-4200-ba74-0833c40bd14c unbound from our chassis#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.473 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f9dd264-e73a-4200-ba74-0833c40bd14c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.474 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[86b9ba90-a011-4ef9-b147-559db4b07bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.475 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c namespace which is not needed anymore#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  3 02:16:07 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 10.988s CPU time.
Dec  3 02:16:07 compute-0 systemd-machined[138558]: Machine qemu-7-instance-00000007 terminated.
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.581 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.600 351492 INFO nova.virt.libvirt.driver [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Instance destroyed successfully.#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.601 351492 DEBUG nova.objects.instance [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lazy-loading 'resources' on Instance uuid 07ce21e6-3627-467a-9b7e-d9045308576c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : haproxy version is 2.8.14-c23fe91
Dec  3 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [NOTICE]   (444943) : path to executable is /usr/sbin/haproxy
Dec  3 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [WARNING]  (444943) : Exiting Master process...
Dec  3 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [ALERT]    (444943) : Current worker (444946) exited with code 143 (Terminated)
Dec  3 02:16:07 compute-0 neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c[444939]: [WARNING]  (444943) : All workers exited. Exiting... (0)
Dec  3 02:16:07 compute-0 systemd[1]: libpod-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope: Deactivated successfully.
Dec  3 02:16:07 compute-0 podman[445529]: 2025-12-03 02:16:07.72274184 +0000 UTC m=+0.068545590 container died 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  3 02:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b-userdata-shm.mount: Deactivated successfully.
Dec  3 02:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c2115dbbdd79e6878ea3d1b5fd20b2e30c3ab979ab90b0f907915a9dad459d-merged.mount: Deactivated successfully.
Dec  3 02:16:07 compute-0 podman[445529]: 2025-12-03 02:16:07.811122521 +0000 UTC m=+0.156926271 container cleanup 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:16:07 compute-0 systemd[1]: libpod-conmon-7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b.scope: Deactivated successfully.
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.840 351492 DEBUG nova.virt.libvirt.vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1673813976',display_name='tempest-ServersTestJSON-server-1673813976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1673813976',id=7,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJYX2+s+Cn7+6pt2DjGw9oFEuqJNIKKTlZXH+fYJLmbL39TCISRXMer1dBsYcpnaM6SERWPVMBKkG2FwLQyhKQV9uLnyTX7LXwX8AMU3L/hKCWN57p10Cgl0YPkCXm4JFA==',key_name='tempest-keypair-555022383',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a1cf3657daa4d798d912ceaae049aa0',ramdisk_id='',reservation_id='r-cpufgz7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-263993337',owner_user_name='tempest-ServersTestJSON-263993337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:15:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7f624afcf845f786397f8aa1bb2a63',uuid=07ce21e6-3627-467a-9b7e-d9045308576c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.841 351492 DEBUG nova.network.os_vif_util [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converting VIF {"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.842 351492 DEBUG nova.network.os_vif_util [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.842 351492 DEBUG os_vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.845 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.845 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5009f27c-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.847 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.849 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.854 351492 INFO os_vif [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:ad:09,bridge_name='br-int',has_traffic_filtering=True,id=5009f27c-5ce3-46eb-b7aa-e82645a3097e,network=Network(9f9dd264-e73a-4200-ba74-0833c40bd14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5009f27c-5c')#033[00m
Dec  3 02:16:07 compute-0 podman[445558]: 2025-12-03 02:16:07.970402787 +0000 UTC m=+0.085901911 container remove 7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.981 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a5550bb8-bcea-4460-addd-2d8abd3e8b0d]: (4, ('Wed Dec  3 02:16:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c (7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b)\n7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b\nWed Dec  3 02:16:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c (7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b)\n7d58250e52fa06f3751bdde305da6190b3c31d1e06120140edcca924bfc1ed7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.984 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[dbac2887-cea6-4e63-8fcf-5179ac190cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.985 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f9dd264-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:07 compute-0 nova_compute[351485]: 2025-12-03 02:16:07.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:07 compute-0 kernel: tap9f9dd264-e0: left promiscuous mode
Dec  3 02:16:07 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:07.996 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac73b6b-6ca0-43fe-975d-477b52005d09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.008 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5528bfa7-4dff-4e94-9972-2ed71674e4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.008 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5d343ea7-5797-40e1-b62f-75884d85f3b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.041 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a20e9f-fd68-4b72-b4ca-fce5b6d9781e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698941, 'reachable_time': 31416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445594, 'error': None, 'target': 'ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d9f9dd264\x2de73a\x2d4200\x2dba74\x2d0833c40bd14c.mount: Deactivated successfully.
Dec  3 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.049 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f9dd264-e73a-4200-ba74-0833c40bd14c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:16:08 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:08.049 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[c229aeeb-22a8-4601-a0d8-be70078dfc9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:08 compute-0 podman[445588]: 2025-12-03 02:16:08.145500881 +0000 UTC m=+0.109144039 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  3 02:16:08 compute-0 podman[445589]: 2025-12-03 02:16:08.154805475 +0000 UTC m=+0.127239161 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 02:16:08 compute-0 podman[445591]: 2025-12-03 02:16:08.163311845 +0000 UTC m=+0.128278590 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:16:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.355 351492 DEBUG nova.compute.manager [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.356 351492 DEBUG nova.compute.manager [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing instance network info cache due to event network-changed-ee5c2dfc-04c3-400a-8073-6f2c65dcea03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.356 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.357 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.358 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Refreshing network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.549 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.627 351492 INFO nova.virt.libvirt.driver [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deleting instance files /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c_del#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.628 351492 INFO nova.virt.libvirt.driver [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deletion of /var/lib/nova/instances/07ce21e6-3627-467a-9b7e-d9045308576c_del complete#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.654 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updated VIF entry in instance network info cache for port 5009f27c-5ce3-46eb-b7aa-e82645a3097e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.654 351492 DEBUG nova.network.neutron [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [{"id": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "address": "fa:16:3e:3a:ad:09", "network": {"id": "9f9dd264-e73a-4200-ba74-0833c40bd14c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1921093277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a1cf3657daa4d798d912ceaae049aa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5009f27c-5c", "ovs_interfaceid": "5009f27c-5ce3-46eb-b7aa-e82645a3097e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.711 351492 DEBUG oslo_concurrency.lockutils [req-52be11f6-e2e6-4fcb-a52e-8093698d9b4b req-a1c64abf-def1-4843-9200-13a0e89e6fa4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-07ce21e6-3627-467a-9b7e-d9045308576c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.727 351492 INFO nova.compute.manager [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.732 351492 DEBUG oslo.service.loopingcall [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.733 351492 DEBUG nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:16:08 compute-0 nova_compute[351485]: 2025-12-03 02:16:08.733 351492 DEBUG nova.network.neutron [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:16:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 243 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 14 KiB/s wr, 173 op/s
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.575 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.576 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.577 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.578 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.580 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.581 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-unplugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.582 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.583 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.584 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.585 351492 DEBUG oslo_concurrency.lockutils [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.589 351492 DEBUG nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] No waiting events found dispatching network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.590 351492 WARNING nova.compute.manager [req-73b50b9a-ae0c-4d60-aa21-f9c1a7978c37 req-1249855b-167c-48bb-b5e3-5c2b5885a45a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received unexpected event network-vif-plugged-5009f27c-5ce3-46eb-b7aa-e82645a3097e for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:16:10 compute-0 nova_compute[351485]: 2025-12-03 02:16:10.946 351492 DEBUG nova.network.neutron [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 208 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 15 KiB/s wr, 215 op/s
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.098 351492 INFO nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Took 2.36 seconds to deallocate network for instance.#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.150 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.150 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.419 351492 DEBUG oslo_concurrency.processutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:16:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998216873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.897 351492 DEBUG oslo_concurrency.processutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.910 351492 DEBUG nova.compute.provider_tree [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.941 351492 DEBUG nova.scheduler.client.report [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.970 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:11 compute-0 nova_compute[351485]: 2025-12-03 02:16:11.996 351492 INFO nova.scheduler.client.report [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Deleted allocations for instance 07ce21e6-3627-467a-9b7e-d9045308576c#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.068 351492 DEBUG oslo_concurrency.lockutils [None req-8c22aebe-246d-4047-89f2-89ae300ee2d9 8a7f624afcf845f786397f8aa1bb2a63 5a1cf3657daa4d798d912ceaae049aa0 - - default default] Lock "07ce21e6-3627-467a-9b7e-d9045308576c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.096 351492 DEBUG nova.compute.manager [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG nova.compute.manager [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing instance network info cache due to event network-changed-d7b1b965-f304-40eb-9f34-c63af54da9f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.097 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.098 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Refreshing network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.125 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated VIF entry in instance network info cache for port ee5c2dfc-04c3-400a-8073-6f2c65dcea03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.125 351492 DEBUG nova.network.neutron [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.171 351492 DEBUG oslo_concurrency.lockutils [req-064855b2-5ebb-4bc5-a297-0f3ceb3ccca6 req-da2d9a38-81d2-4b33-8f5a-f3c600ed8da8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.703 351492 DEBUG nova.compute.manager [req-bc0a4e1e-17e7-4ce6-8594-358cdd016f6a req-1bb46a82-c45e-4be4-9daf-1587168e5168 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Received event network-vif-deleted-5009f27c-5ce3-46eb-b7aa-e82645a3097e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:12 compute-0 nova_compute[351485]: 2025-12-03 02:16:12.849 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 196 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Dec  3 02:16:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.433 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.434 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.435 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.436 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.437 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.440 351492 INFO nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Terminating instance#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.443 351492 DEBUG nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:16:13 compute-0 kernel: tapd7b1b965-f3 (unregistering): left promiscuous mode
Dec  3 02:16:13 compute-0 NetworkManager[48912]: <info>  [1764728173.5232] device (tapd7b1b965-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00093|binding|INFO|Releasing lport d7b1b965-f304-40eb-9f34-c63af54da9f4 from this chassis (sb_readonly=0)
Dec  3 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00094|binding|INFO|Setting lport d7b1b965-f304-40eb-9f34-c63af54da9f4 down in Southbound
Dec  3 02:16:13 compute-0 ovn_controller[89134]: 2025-12-03T02:16:13Z|00095|binding|INFO|Removing iface tapd7b1b965-f3 ovn-installed in OVS
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.551 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:b1:4a 10.100.0.3'], port_security=['fa:16:3e:57:b1:4a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c870f25-6c33-4e95-b540-5a806454f556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5875dd9a17274c38a2ae81fb3759558e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '286ce87f-1fc2-4f0d-bf8b-2c43a617c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6691e56-1a9f-42fd-b8af-9a3ce340219b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=d7b1b965-f304-40eb-9f34-c63af54da9f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.552 288528 INFO neutron.agent.ovn.metadata.agent [-] Port d7b1b965-f304-40eb-9f34-c63af54da9f4 in datapath e0e44891-e46c-41a0-a083-a444c0d34e1c unbound from our chassis#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.554 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0e44891-e46c-41a0-a083-a444c0d34e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.555 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c5123a-aab9-42c9-a5c3-8e2319550794]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.556 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c namespace which is not needed anymore#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  3 02:16:13 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 8.094s CPU time.
Dec  3 02:16:13 compute-0 systemd-machined[138558]: Machine qemu-9-instance-00000009 terminated.
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.681 351492 INFO nova.virt.libvirt.driver [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Instance destroyed successfully.#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.682 351492 DEBUG nova.objects.instance [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lazy-loading 'resources' on Instance uuid 5c870f25-6c33-4e95-b540-5a806454f556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.705 351492 DEBUG nova.virt.libvirt.vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1318824371',display_name='tempest-ServersTestManualDisk-server-1318824371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1318824371',id=9,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjjprZxgO/4fBzfH66ApAPdvyVvzXxf8Ff5aorWRcZSUbk0SJJUQELjud9zhnFrHG5MNyoaXEfhhqd7MMh1lMDbphtAOFjo2kbDR4EPXiA+56V0JD9bhhKqPo/y7SQ3BA==',key_name='tempest-keypair-1645493537',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:16:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5875dd9a17274c38a2ae81fb3759558e',ramdisk_id='',reservation_id='r-a0h400yy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-632797169',owner_user_name='tempest-ServersTestManualDisk-632797169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:16:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dc5f09973d5430fb9d8106a1a0a2479',uuid=5c870f25-6c33-4e95-b540-5a806454f556,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.706 351492 DEBUG nova.network.os_vif_util [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converting VIF {"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.707 351492 DEBUG nova.network.os_vif_util [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.708 351492 DEBUG os_vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.710 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7b1b965-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.714 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.716 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.719 351492 INFO os_vif [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:b1:4a,bridge_name='br-int',has_traffic_filtering=True,id=d7b1b965-f304-40eb-9f34-c63af54da9f4,network=Network(e0e44891-e46c-41a0-a083-a444c0d34e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7b1b965-f3')#033[00m
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : haproxy version is 2.8.14-c23fe91
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [NOTICE]   (445490) : path to executable is /usr/sbin/haproxy
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : Exiting Master process...
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : Exiting Master process...
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [ALERT]    (445490) : Current worker (445492) exited with code 143 (Terminated)
Dec  3 02:16:13 compute-0 neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c[445486]: [WARNING]  (445490) : All workers exited. Exiting... (0)
Dec  3 02:16:13 compute-0 systemd[1]: libpod-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope: Deactivated successfully.
Dec  3 02:16:13 compute-0 podman[445693]: 2025-12-03 02:16:13.751508862 +0000 UTC m=+0.073720617 container died 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e-userdata-shm.mount: Deactivated successfully.
Dec  3 02:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5295931454d2be4766609de4f9590642eff52873c1c45af103b232bf8f6acedc-merged.mount: Deactivated successfully.
Dec  3 02:16:13 compute-0 podman[445693]: 2025-12-03 02:16:13.798766789 +0000 UTC m=+0.120978544 container cleanup 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:16:13 compute-0 systemd[1]: libpod-conmon-51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e.scope: Deactivated successfully.
Dec  3 02:16:13 compute-0 podman[445747]: 2025-12-03 02:16:13.90942788 +0000 UTC m=+0.080638943 container remove 51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.931 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ea9797-d70b-4274-89c2-99e046fd2c6d]: (4, ('Wed Dec  3 02:16:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c (51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e)\n51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e\nWed Dec  3 02:16:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c (51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e)\n51794d70088c7f895c2aa96abef09844a97a7dca0471ddcb8ca433f0a3cc397e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.934 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[04b02f8c-d068-4115-acf6-8379634c30bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.936 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0e44891-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.939 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 kernel: tape0e44891-e0: left promiscuous mode
Dec  3 02:16:13 compute-0 nova_compute[351485]: 2025-12-03 02:16:13.958 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.960 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[06de364f-4693-4c05-94df-04d115168e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.977 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[098f7c3f-fb1f-474e-b960-4881ce8e254d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.979 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9a7e7b-69e8-4cdd-b637-7137e4116e9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:13.998 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae56b59-efe0-4aae-8823-b0466958ba54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699911, 'reachable_time': 17141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445769, 'error': None, 'target': 'ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:14 compute-0 systemd[1]: run-netns-ovnmeta\x2de0e44891\x2de46c\x2d41a0\x2da083\x2da444c0d34e1c.mount: Deactivated successfully.
Dec  3 02:16:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:14.004 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0e44891-e46c-41a0-a083-a444c0d34e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:16:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:14.005 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd2ba20-24ed-457f-a8e8-d035c98dd6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:14 compute-0 podman[445760]: 2025-12-03 02:16:14.068714096 +0000 UTC m=+0.096047278 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.296 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.297 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.298 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.299 351492 DEBUG oslo_concurrency.lockutils [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.299 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.300 351492 DEBUG nova.compute.manager [req-028d9948-40f3-4be7-abe5-e24cc023786e req-f81e1ad6-af91-4984-acf3-625e90b9fb45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-unplugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.408 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updated VIF entry in instance network info cache for port d7b1b965-f304-40eb-9f34-c63af54da9f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.409 351492 DEBUG nova.network.neutron [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [{"id": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "address": "fa:16:3e:57:b1:4a", "network": {"id": "e0e44891-e46c-41a0-a083-a444c0d34e1c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-900280430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5875dd9a17274c38a2ae81fb3759558e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7b1b965-f3", "ovs_interfaceid": "d7b1b965-f304-40eb-9f34-c63af54da9f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.443 351492 DEBUG oslo_concurrency.lockutils [req-3ba158f2-72b8-4ac7-ab51-5599d42ef0d2 req-83bca034-7f21-431a-8536-fc66784c51a6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-5c870f25-6c33-4e95-b540-5a806454f556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.511 351492 INFO nova.virt.libvirt.driver [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deleting instance files /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556_del#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.512 351492 INFO nova.virt.libvirt.driver [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deletion of /var/lib/nova/instances/5c870f25-6c33-4e95-b540-5a806454f556_del complete#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.598 351492 INFO nova.compute.manager [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.599 351492 DEBUG oslo.service.loopingcall [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.600 351492 DEBUG nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.601 351492 DEBUG nova.network.neutron [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.617 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.619 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:16:14 compute-0 nova_compute[351485]: 2025-12-03 02:16:14.619 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 188 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 153 op/s
Dec  3 02:16:15 compute-0 ovn_controller[89134]: 2025-12-03T02:16:15Z|00096|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:16:15 compute-0 ovn_controller[89134]: 2025-12-03T02:16:15Z|00097|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:16:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:16:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444078930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.350 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.351 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.360 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.361 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.883 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.884 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3676MB free_disk=59.92551803588867GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.976 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4f50e501-f565-4e1f-aa02-df921702eff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.977 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 5c870f25-6c33-4e95-b540-5a806454f556 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:16:15 compute-0 nova_compute[351485]: 2025-12-03 02:16:15.978 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.064 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.476 351492 DEBUG nova.network.neutron [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.495 351492 INFO nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Took 1.89 seconds to deallocate network for instance.#033[00m
Dec  3 02:16:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:16:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578666258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.565 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.572 351492 DEBUG nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.574 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "5c870f25-6c33-4e95-b540-5a806454f556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.575 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.576 351492 DEBUG oslo_concurrency.lockutils [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.577 351492 DEBUG nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] No waiting events found dispatching network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.578 351492 WARNING nova.compute.manager [req-34407a4d-bcef-46ff-b68e-b4f7896160dd req-fc75dec1-c09b-4d69-a8d6-a36917745f24 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received unexpected event network-vif-plugged-d7b1b965-f304-40eb-9f34-c63af54da9f4 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.585 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.601 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.633 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.675 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.676 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.676 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:16 compute-0 nova_compute[351485]: 2025-12-03 02:16:16.777 351492 DEBUG oslo_concurrency.processutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.3 KiB/s wr, 179 op/s
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.154 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:16:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085630615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.304 351492 DEBUG oslo_concurrency.processutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.316 351492 DEBUG nova.compute.provider_tree [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.342 351492 DEBUG nova.scheduler.client.report [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.380 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.413 351492 INFO nova.scheduler.client.report [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Deleted allocations for instance 5c870f25-6c33-4e95-b540-5a806454f556#033[00m
Dec  3 02:16:17 compute-0 nova_compute[351485]: 2025-12-03 02:16:17.499 351492 DEBUG oslo_concurrency.lockutils [None req-d59ee5b9-db12-421e-b341-192c745e8bf7 4dc5f09973d5430fb9d8106a1a0a2479 5875dd9a17274c38a2ae81fb3759558e - - default default] Lock "5c870f25-6c33-4e95-b540-5a806454f556" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.677 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.678 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.679 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.714 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.769 351492 DEBUG nova.compute.manager [req-4b9ae855-b20e-437d-a2c2-31b7f0ea226d req-131e62cc-9819-4296-a7c0-ab975f3c47a9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Received event network-vif-deleted-d7b1b965-f304-40eb-9f34-c63af54da9f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:18 compute-0 podman[445852]: 2025-12-03 02:16:18.87808972 +0000 UTC m=+0.094743452 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:16:18 compute-0 podman[445853]: 2025-12-03 02:16:18.88622462 +0000 UTC m=+0.120609413 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, config_id=edpm, name=ubi9)
Dec  3 02:16:18 compute-0 podman[445864]: 2025-12-03 02:16:18.891985753 +0000 UTC m=+0.112681719 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:16:18 compute-0 podman[445851]: 2025-12-03 02:16:18.905190107 +0000 UTC m=+0.146950539 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:16:18 compute-0 podman[445850]: 2025-12-03 02:16:18.934884347 +0000 UTC m=+0.182881736 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.960 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:16:18 compute-0 nova_compute[351485]: 2025-12-03 02:16:18.961 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:16:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec  3 02:16:19 compute-0 ovn_controller[89134]: 2025-12-03T02:16:19Z|00098|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:16:19 compute-0 ovn_controller[89134]: 2025-12-03T02:16:19Z|00099|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:16:19 compute-0 nova_compute[351485]: 2025-12-03 02:16:19.453 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec  3 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.049 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:21 compute-0 nova_compute[351485]: 2025-12-03 02:16:21.112 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:21 compute-0 podman[446121]: 2025-12-03 02:16:21.651352883 +0000 UTC m=+0.122950569 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:16:21 compute-0 podman[446121]: 2025-12-03 02:16:21.776389501 +0000 UTC m=+0.247987187 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.005 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.590 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.595 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728167.59362, 07ce21e6-3627-467a-9b7e-d9045308576c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.595 351492 INFO nova.compute.manager [-] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:16:22 compute-0 nova_compute[351485]: 2025-12-03 02:16:22.622 351492 DEBUG nova.compute.manager [None req-9a9120ac-29b9-4da1-b555-95e995a3bf85 - - - - - -] [instance: 07ce21e6-3627-467a-9b7e-d9045308576c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.7 KiB/s wr, 74 op/s
Dec  3 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:16:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:16:23 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.573 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:23 compute-0 nova_compute[351485]: 2025-12-03 02:16:23.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:23 compute-0 ovn_controller[89134]: 2025-12-03T02:16:23Z|00100|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:16:23 compute-0 ovn_controller[89134]: 2025-12-03T02:16:23Z|00101|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:16:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:24 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:24 compute-0 nova_compute[351485]: 2025-12-03 02:16:24.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev de133ea7-cfcb-4226-9d40-d42e848e99ec does not exist
Dec  3 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4ea57398-5710-469b-a3f8-e5e16c8088ed does not exist
Dec  3 02:16:24 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e76a8cd6-5974-4bbf-a12e-2de865ecd505 does not exist
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:16:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:16:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:16:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Dec  3 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:25 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.421412649 +0000 UTC m=+0.074077797 container create c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:25 compute-0 systemd[1]: Started libpod-conmon-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope.
Dec  3 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.399223481 +0000 UTC m=+0.051888609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:25 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.554194546 +0000 UTC m=+0.206859674 container init c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.56284126 +0000 UTC m=+0.215506368 container start c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:16:25 compute-0 podman[446543]: 2025-12-03 02:16:25.567510793 +0000 UTC m=+0.220175901 container attach c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:16:25 compute-0 practical_burnell[446559]: 167 167
Dec  3 02:16:25 compute-0 systemd[1]: libpod-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope: Deactivated successfully.
Dec  3 02:16:25 compute-0 podman[446564]: 2025-12-03 02:16:25.637144303 +0000 UTC m=+0.044993734 container died c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-686bd4bf68dea2958674db1033e71c8c987c6ea1ebf7a296918b08a53d0108e6-merged.mount: Deactivated successfully.
Dec  3 02:16:25 compute-0 podman[446564]: 2025-12-03 02:16:25.699929009 +0000 UTC m=+0.107778370 container remove c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:16:25 compute-0 systemd[1]: libpod-conmon-c8940cdc874bcd10e5beed0e0d0065a21215dbafa726359ad27b2eebf2ae60cd.scope: Deactivated successfully.
Dec  3 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:25.92898582 +0000 UTC m=+0.043549933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.266261382 +0000 UTC m=+0.380825475 container create c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:16:26 compute-0 systemd[1]: Started libpod-conmon-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope.
Dec  3 02:16:26 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.44571473 +0000 UTC m=+0.560278803 container init c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.47402064 +0000 UTC m=+0.588584743 container start c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:16:26 compute-0 podman[446585]: 2025-12-03 02:16:26.480746251 +0000 UTC m=+0.595310374 container attach c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:16:26 compute-0 nova_compute[351485]: 2025-12-03 02:16:26.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec  3 02:16:27 compute-0 upbeat_wozniak[446601]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:16:27 compute-0 upbeat_wozniak[446601]: --> relative data size: 1.0
Dec  3 02:16:27 compute-0 upbeat_wozniak[446601]: --> All data devices are unavailable
Dec  3 02:16:27 compute-0 systemd[1]: libpod-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Deactivated successfully.
Dec  3 02:16:27 compute-0 systemd[1]: libpod-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Consumed 1.102s CPU time.
Dec  3 02:16:27 compute-0 podman[446631]: 2025-12-03 02:16:27.751075422 +0000 UTC m=+0.045790276 container died c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdc32d6cdd1080aa36461dcc28275769709cc7bd13a78e4a8ab9d404ddff84f-merged.mount: Deactivated successfully.
Dec  3 02:16:27 compute-0 podman[446631]: 2025-12-03 02:16:27.846239455 +0000 UTC m=+0.140954269 container remove c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:27 compute-0 systemd[1]: libpod-conmon-c226a8524eb8c3b6ad6a82e2e55800c8df7de7b0aaf25b7d94f5f7a6ab73d977.scope: Deactivated successfully.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.287479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188287521, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1274, "num_deletes": 256, "total_data_size": 1853510, "memory_usage": 1878144, "flush_reason": "Manual Compaction"}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188298897, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1824226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36818, "largest_seqno": 38091, "table_properties": {"data_size": 1818182, "index_size": 3311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12865, "raw_average_key_size": 19, "raw_value_size": 1805901, "raw_average_value_size": 2752, "num_data_blocks": 148, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728068, "oldest_key_time": 1764728068, "file_creation_time": 1764728188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 11468 microseconds, and 5195 cpu microseconds.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.298949) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1824226 bytes OK
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.298966) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300786) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300801) EVENT_LOG_v1 {"time_micros": 1764728188300796, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.300817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1847720, prev total WAL file size 1847720, number of live WAL files 2.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.301846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1781KB)], [83(8712KB)]
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188301951, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10746005, "oldest_snapshot_seqno": -1}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5808 keys, 10638446 bytes, temperature: kUnknown
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188379681, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10638446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10597472, "index_size": 25376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147051, "raw_average_key_size": 25, "raw_value_size": 10490322, "raw_average_value_size": 1806, "num_data_blocks": 1046, "num_entries": 5808, "num_filter_entries": 5808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.379902) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10638446 bytes
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.381291) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 6336, records dropped: 528 output_compression: NoCompression
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.381307) EVENT_LOG_v1 {"time_micros": 1764728188381299, "job": 48, "event": "compaction_finished", "compaction_time_micros": 77808, "compaction_time_cpu_micros": 26225, "output_level": 6, "num_output_files": 1, "total_output_size": 10638446, "num_input_records": 6336, "num_output_records": 5808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188381794, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728188383519, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.301558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:16:28.383694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:16:28
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'images', '.mgr', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec  3 02:16:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.575 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.677 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728173.6760063, 5c870f25-6c33-4e95-b540-5a806454f556 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.677 351492 INFO nova.compute.manager [-] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.705 351492 DEBUG nova.compute.manager [None req-f38ade89-d080-4331-813f-bc37ef2c9be0 - - - - - -] [instance: 5c870f25-6c33-4e95-b540-5a806454f556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:28 compute-0 nova_compute[351485]: 2025-12-03 02:16:28.720 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.812015288 +0000 UTC m=+0.057183518 container create d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.78875277 +0000 UTC m=+0.033921040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:28 compute-0 systemd[1]: Started libpod-conmon-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope.
Dec  3 02:16:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.968701212 +0000 UTC m=+0.213869442 container init d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.993257546 +0000 UTC m=+0.238425756 container start d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:16:28 compute-0 podman[446780]: 2025-12-03 02:16:28.997316261 +0000 UTC m=+0.242484471 container attach d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:16:28 compute-0 elastic_lalande[446797]: 167 167
Dec  3 02:16:29 compute-0 systemd[1]: libpod-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope: Deactivated successfully.
Dec  3 02:16:29 compute-0 podman[446780]: 2025-12-03 02:16:29.002629872 +0000 UTC m=+0.247798122 container died d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 150 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec  3 02:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-073abeb732d6049d7e54abf61dddf4a852f122ff213b0cc0245be9ffb445336d-merged.mount: Deactivated successfully.
Dec  3 02:16:29 compute-0 podman[446780]: 2025-12-03 02:16:29.081486403 +0000 UTC m=+0.326654643 container remove d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:16:29 compute-0 systemd[1]: libpod-conmon-d4b9860229fa06a5116cb6d98b7bcb449e193b5b53159484306119203d602f9c.scope: Deactivated successfully.
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:16:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.322624345 +0000 UTC m=+0.073742457 container create 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.300719275 +0000 UTC m=+0.051837407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:29 compute-0 systemd[1]: Started libpod-conmon-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope.
Dec  3 02:16:29 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.43094566 +0000 UTC m=+0.182063842 container init 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.447582321 +0000 UTC m=+0.198700433 container start 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:16:29 compute-0 podman[446819]: 2025-12-03 02:16:29.451271085 +0000 UTC m=+0.202389287 container attach 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:16:29 compute-0 podman[158098]: time="2025-12-03T02:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46609 "" "Go-http-client/1.1"
Dec  3 02:16:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9529 "" "Go-http-client/1.1"
Dec  3 02:16:30 compute-0 ovn_controller[89134]: 2025-12-03T02:16:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:b3:fa 10.100.0.3
Dec  3 02:16:30 compute-0 ovn_controller[89134]: 2025-12-03T02:16:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:b3:fa 10.100.0.3
Dec  3 02:16:30 compute-0 frosty_wu[446836]: {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    "0": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "devices": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "/dev/loop3"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            ],
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_name": "ceph_lv0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_size": "21470642176",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "name": "ceph_lv0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "tags": {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_name": "ceph",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.crush_device_class": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.encrypted": "0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_id": "0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.vdo": "0"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            },
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "vg_name": "ceph_vg0"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        }
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    ],
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    "1": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "devices": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "/dev/loop4"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            ],
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_name": "ceph_lv1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_size": "21470642176",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "name": "ceph_lv1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "tags": {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_name": "ceph",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.crush_device_class": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.encrypted": "0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_id": "1",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.vdo": "0"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            },
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "vg_name": "ceph_vg1"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        }
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    ],
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    "2": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "devices": [
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "/dev/loop5"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            ],
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_name": "ceph_lv2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_size": "21470642176",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "name": "ceph_lv2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "tags": {
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.cluster_name": "ceph",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.crush_device_class": "",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.encrypted": "0",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osd_id": "2",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:                "ceph.vdo": "0"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            },
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "type": "block",
Dec  3 02:16:30 compute-0 frosty_wu[446836]:            "vg_name": "ceph_vg2"
Dec  3 02:16:30 compute-0 frosty_wu[446836]:        }
Dec  3 02:16:30 compute-0 frosty_wu[446836]:    ]
Dec  3 02:16:30 compute-0 frosty_wu[446836]: }
Dec  3 02:16:30 compute-0 systemd[1]: libpod-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope: Deactivated successfully.
Dec  3 02:16:30 compute-0 podman[446819]: 2025-12-03 02:16:30.345403693 +0000 UTC m=+1.096521815 container died 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:16:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d6dbc74cb911b9bf0a1963002478468147066b943e16dbc50182a9feb6115f-merged.mount: Deactivated successfully.
Dec  3 02:16:30 compute-0 podman[446819]: 2025-12-03 02:16:30.422851904 +0000 UTC m=+1.173970016 container remove 64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:16:30 compute-0 systemd[1]: libpod-conmon-64c30bba8b4d5b9eb8afc0b368018d082dd34b592f2dd2866022356a6c8ebcaa.scope: Deactivated successfully.
Dec  3 02:16:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 169 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.412461023 +0000 UTC m=+0.100147154 container create d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:16:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.384460171 +0000 UTC m=+0.072146352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:31 compute-0 systemd[1]: Started libpod-conmon-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope.
Dec  3 02:16:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.538340555 +0000 UTC m=+0.226026686 container init d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.558088903 +0000 UTC m=+0.245775044 container start d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 02:16:31 compute-0 charming_haslett[447009]: 167 167
Dec  3 02:16:31 compute-0 systemd[1]: libpod-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope: Deactivated successfully.
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.64881032 +0000 UTC m=+0.336496431 container attach d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.651225899 +0000 UTC m=+0.338912020 container died d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8cece939bae6a2a03d7fabce3f276c28b6b9e7ccc9772a179328ee9c08dcedc-merged.mount: Deactivated successfully.
Dec  3 02:16:31 compute-0 podman[446993]: 2025-12-03 02:16:31.815039133 +0000 UTC m=+0.502725234 container remove d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:16:31 compute-0 systemd[1]: libpod-conmon-d2f8813cdfcbd50db01360f65cbfc2c96cb418d986819cf2d04d586182cf2741.scope: Deactivated successfully.
Dec  3 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.189210549 +0000 UTC m=+0.146563507 container create 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.103325359 +0000 UTC m=+0.060678317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:16:32 compute-0 systemd[1]: Started libpod-conmon-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope.
Dec  3 02:16:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.352043816 +0000 UTC m=+0.309396774 container init 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.375142259 +0000 UTC m=+0.332495207 container start 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:16:32 compute-0 podman[447032]: 2025-12-03 02:16:32.38224403 +0000 UTC m=+0.339596958 container attach 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.871 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.873 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.891 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.975 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.976 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.989 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:16:32 compute-0 nova_compute[351485]: 2025-12-03 02:16:32.990 351492 INFO nova.compute.claims [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:16:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 181 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.161 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:33 compute-0 epic_beaver[447048]: {
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_id": 2,
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "type": "bluestore"
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    },
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_id": 1,
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "type": "bluestore"
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    },
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_id": 0,
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:16:33 compute-0 epic_beaver[447048]:        "type": "bluestore"
Dec  3 02:16:33 compute-0 epic_beaver[447048]:    }
Dec  3 02:16:33 compute-0 epic_beaver[447048]: }
Dec  3 02:16:33 compute-0 systemd[1]: libpod-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Deactivated successfully.
Dec  3 02:16:33 compute-0 systemd[1]: libpod-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Consumed 1.254s CPU time.
Dec  3 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788056308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:33 compute-0 podman[447101]: 2025-12-03 02:16:33.738070881 +0000 UTC m=+0.053852325 container died 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.752 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.768 351492 DEBUG nova.compute.provider_tree [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1426e4c0be10293d1a70903179b120f8e7bf214ae9979deca55ddf7549aa88b6-merged.mount: Deactivated successfully.
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.803 351492 DEBUG nova.scheduler.client.report [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:16:33 compute-0 podman[447101]: 2025-12-03 02:16:33.806922969 +0000 UTC m=+0.122704413 container remove 36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_beaver, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  3 02:16:33 compute-0 systemd[1]: libpod-conmon-36486b654e3a06d5187d76b59d74de27111bc081935bfcd26e4ea058c92fc120.scope: Deactivated successfully.
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.839 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.842 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:16:33 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev df2015b6-4aa0-4c38-9dff-a2cb051640d8 does not exist
Dec  3 02:16:33 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99152673-61f1-4ef0-8ac1-18e4a3c7a5f6 does not exist
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.912 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.912 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.952 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:16:33 compute-0 nova_compute[351485]: 2025-12-03 02:16:33.990 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.221 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.223 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.223 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating image(s)#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.266 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.309 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.364 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.373 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.419 351492 DEBUG nova.policy [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.462 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.463 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.465 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.465 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.506 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:34 compute-0 nova_compute[351485]: 2025-12-03 02:16:34.520 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.006 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 190 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.8 MiB/s wr, 66 op/s
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.159 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] resizing rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.366 351492 DEBUG nova.objects.instance [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'migration_context' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.388 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.389 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Ensure instance console log exists: /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.389 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.390 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.391 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:35 compute-0 nova_compute[351485]: 2025-12-03 02:16:35.868 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Successfully created port: ae5db7e6-7a7a-4116-954a-be851ee02864 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:16:36 compute-0 ovn_controller[89134]: 2025-12-03T02:16:36Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:dd:2f 10.100.0.9
Dec  3 02:16:36 compute-0 ovn_controller[89134]: 2025-12-03T02:16:36Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:dd:2f 10.100.0.9
Dec  3 02:16:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.284 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Successfully updated port: ae5db7e6-7a7a-4116-954a-be851ee02864 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.321 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.322 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.322 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.489 351492 DEBUG nova.compute.manager [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.490 351492 DEBUG nova.compute.manager [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.490 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:37 compute-0 nova_compute[351485]: 2025-12-03 02:16:37.723 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:16:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:38 compute-0 nova_compute[351485]: 2025-12-03 02:16:38.583 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:38 compute-0 nova_compute[351485]: 2025-12-03 02:16:38.725 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001768203221657876 of space, bias 1.0, pg target 0.5304609664973627 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:16:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:16:38 compute-0 podman[447336]: 2025-12-03 02:16:38.878119607 +0000 UTC m=+0.120023087 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:16:38 compute-0 podman[447334]: 2025-12-03 02:16:38.910916395 +0000 UTC m=+0.152157136 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  3 02:16:38 compute-0 podman[447335]: 2025-12-03 02:16:38.915865555 +0000 UTC m=+0.157139027 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:16:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 242 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 5.6 MiB/s wr, 129 op/s
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.210 351492 DEBUG nova.network.neutron [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.234 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance network_info: |[{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.235 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.238 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start _get_guest_xml network_info=[{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.256 351492 WARNING nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.264 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.264 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.275 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.276 351492 DEBUG nova.virt.libvirt.host [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.276 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.277 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.278 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.279 351492 DEBUG nova.virt.hardware [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.282 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:16:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921637875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.811 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.860 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:39 compute-0 nova_compute[351485]: 2025-12-03 02:16:39.870 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:16:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661874518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.346 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.350 351492 DEBUG nova.virt.libvirt.vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:16:34Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.352 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.355 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.358 351492 DEBUG nova.objects.instance [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.378 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <uuid>8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</uuid>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <name>instance-0000000a</name>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:name>tempest-TestNetworkBasicOps-server-2141861820</nova:name>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:16:39</nova:creationTime>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:user uuid="abdbefadac2a4d98bd33ed8a1a60ff75">tempest-TestNetworkBasicOps-1039072813-project-member</nova:user>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:project uuid="f8f8e5d142604e8c8aabf1e14a1467ca">tempest-TestNetworkBasicOps-1039072813</nova:project>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <nova:port uuid="ae5db7e6-7a7a-4116-954a-be851ee02864">
Dec  3 02:16:40 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <system>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="serial">8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="uuid">8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </system>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <os>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </os>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <features>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </features>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk">
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </source>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config">
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </source>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:16:40 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:ed:5c:3e"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <target dev="tapae5db7e6-7a"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/console.log" append="off"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <video>
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </video>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:16:40 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:16:40 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:16:40 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:16:40 compute-0 nova_compute[351485]: </domain>
Dec  3 02:16:40 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.379 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Preparing to wait for external event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.380 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.380 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.381 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.382 351492 DEBUG nova.virt.libvirt.vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:16:34Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.382 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.383 351492 DEBUG nova.network.os_vif_util [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.384 351492 DEBUG os_vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.385 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.385 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.386 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.391 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.392 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae5db7e6-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.392 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae5db7e6-7a, col_values=(('external_ids', {'iface-id': 'ae5db7e6-7a7a-4116-954a-be851ee02864', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:5c:3e', 'vm-uuid': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:40 compute-0 NetworkManager[48912]: <info>  [1764728200.3982] manager: (tapae5db7e6-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.408 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.410 351492 INFO os_vif [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a')#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.495 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.495 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.496 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No VIF found with MAC fa:16:3e:ed:5c:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.497 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Using config drive#033[00m
Dec  3 02:16:40 compute-0 nova_compute[351485]: 2025-12-03 02:16:40.550 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 702 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.785 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Creating config drive at /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config#033[00m
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.796 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kfdv5sr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.833 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.835 351492 DEBUG nova.network.neutron [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.936 351492 DEBUG oslo_concurrency.lockutils [req-596bd03c-fdc1-41c1-ab82-31f2872d2757 req-7abf5376-3fac-463b-bfa5-a6144235fa62 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:41 compute-0 nova_compute[351485]: 2025-12-03 02:16:41.947 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kfdv5sr" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.009 351492 DEBUG nova.storage.rbd_utils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.022 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.039 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.046 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.323 351492 DEBUG oslo_concurrency.processutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.325 351492 INFO nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deleting local config drive /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.config because it was imported into RBD.#033[00m
Dec  3 02:16:42 compute-0 kernel: tapae5db7e6-7a: entered promiscuous mode
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.4378] manager: (tapae5db7e6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec  3 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00102|binding|INFO|Claiming lport ae5db7e6-7a7a-4116-954a-be851ee02864 for this chassis.
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.442 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00103|binding|INFO|ae5db7e6-7a7a-4116-954a-be851ee02864: Claiming fa:16:3e:ed:5c:3e 10.100.0.3
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.454 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.456 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 bound to our chassis#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.460 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121#033[00m
Dec  3 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00104|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 ovn-installed in OVS
Dec  3 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00105|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 up in Southbound
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.481 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b97f6b76-7e7c-4627-a32b-02a9432e0089]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.483 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped008f09-d1 in ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.483 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.486 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.485 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped008f09-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.485 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8eba9b-4d17-4c03-903c-f49ecebcdf2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.488 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[56be6659-0a82-4745-a904-99fca778c790]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 systemd-machined[138558]: New machine qemu-10-instance-0000000a.
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.503 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b28415b9-38e9-4a4c-986d-e7dd35285ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.532 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a16e1d92-c362-4fed-b45d-20af6302c729]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 systemd-udevd[447536]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5693] device (tapae5db7e6-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.570 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfdb57d-ef60-4206-a48e-c58e7132bb63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5734] device (tapae5db7e6-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.577 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cb31e717-977a-46e1-ad1e-7deaac55c852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 systemd-udevd[447540]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.5796] manager: (taped008f09-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.615 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[b93fa313-ddd9-42c5-a0b0-55b1469e2f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.618 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ac591656-3ca7-4074-a075-aa8dd6724033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.6457] device (taped008f09-d0): carrier: link connected
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.652 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5e7e7d-0326-46c0-b562-dfc78ef68ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.675 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cbd505-eae3-4b3f-9144-2bdcfc7a8f21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447565, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.699 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0013dd0f-db32-40cb-baa1-6a6e85f82895]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:11a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704212, 'tstamp': 704212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 447566, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.721 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[01c816aa-b6a0-45c9-b8c9-bac70d492e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 447567, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.756 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c49f2058-a28b-488c-9f15-dd0c9e5c2c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.822 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4969a564-777c-48dc-b0fd-a48499c1eeb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.823 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.823 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:42 compute-0 kernel: taped008f09-d0: entered promiscuous mode
Dec  3 02:16:42 compute-0 NetworkManager[48912]: <info>  [1764728202.8271] manager: (taped008f09-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.826 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.829 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.834 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.836 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 ovn_controller[89134]: 2025-12-03T02:16:42Z|00106|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:16:42 compute-0 nova_compute[351485]: 2025-12-03 02:16:42.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.869 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.871 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d7e096-698f-4463-9411-6f0a86a57661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.873 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-ed008f09-da46-4507-9be2-7398a4728121
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/ed008f09-da46-4507-9be2-7398a4728121.pid.haproxy
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID ed008f09-da46-4507-9be2-7398a4728121
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:16:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:42.874 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'env', 'PROCESS_TAG=haproxy-ed008f09-da46-4507-9be2-7398a4728121', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed008f09-da46-4507-9be2-7398a4728121.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:16:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 520 KiB/s rd, 4.5 MiB/s wr, 118 op/s
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.202 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728203.2015908, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.203 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Started (Lifecycle Event)#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.243 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.252 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728203.2220917, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.252 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.277 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.286 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:16:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.311 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.405786376 +0000 UTC m=+0.080667573 container create abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.362370278 +0000 UTC m=+0.037251495 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:16:43 compute-0 systemd[1]: Started libpod-conmon-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope.
Dec  3 02:16:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6310634a2e9b69b7fec86a833550521f2d887dce434572f35b449a118a1fc6ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.536860115 +0000 UTC m=+0.211741282 container init abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:16:43 compute-0 podman[447640]: 2025-12-03 02:16:43.553051023 +0000 UTC m=+0.227932180 container start abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:16:43 compute-0 nova_compute[351485]: 2025-12-03 02:16:43.585 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:43 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : New worker (447661) forked
Dec  3 02:16:43 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : Loading success.
Dec  3 02:16:44 compute-0 podman[447670]: 2025-12-03 02:16:44.877499995 +0000 UTC m=+0.153537155 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:16:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 403 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec  3 02:16:45 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:45.050 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:16:45 compute-0 nova_compute[351485]: 2025-12-03 02:16:45.400 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Dec  3 02:16:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:16:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:16:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:16:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160411010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:16:48 compute-0 nova_compute[351485]: 2025-12-03 02:16:48.203 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:48 compute-0 nova_compute[351485]: 2025-12-03 02:16:48.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 524 KiB/s wr, 35 op/s
Dec  3 02:16:49 compute-0 podman[447693]: 2025-12-03 02:16:49.862962097 +0000 UTC m=+0.115014325 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:16:49 compute-0 podman[447694]: 2025-12-03 02:16:49.869485312 +0000 UTC m=+0.105052683 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:16:49 compute-0 podman[447698]: 2025-12-03 02:16:49.876814719 +0000 UTC m=+0.108606903 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Dec  3 02:16:49 compute-0 podman[447703]: 2025-12-03 02:16:49.884493887 +0000 UTC m=+0.118031281 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:16:49 compute-0 podman[447692]: 2025-12-03 02:16:49.889788726 +0000 UTC m=+0.147143604 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.403 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.806 351492 DEBUG nova.compute.manager [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.807 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.809 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.810 351492 DEBUG oslo_concurrency.lockutils [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.811 351492 DEBUG nova.compute.manager [req-b1566333-1b4d-43e1-a41d-bcdd93797ad7 req-4ef927c2-487f-4f89-b32c-63052e17f0f7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Processing event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.813 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.820 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728210.8195055, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.822 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.826 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.833 351492 INFO nova.virt.libvirt.driver [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance spawned successfully.#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.833 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.853 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.869 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.874 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.875 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.876 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.877 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.878 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.879 351492 DEBUG nova.virt.libvirt.driver [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.891 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.977 351492 INFO nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 16.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:16:50 compute-0 nova_compute[351485]: 2025-12-03 02:16:50.978 351492 DEBUG nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:16:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 529 KiB/s wr, 36 op/s
Dec  3 02:16:51 compute-0 nova_compute[351485]: 2025-12-03 02:16:51.115 351492 INFO nova.compute.manager [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 18.17 seconds to build instance.#033[00m
Dec  3 02:16:51 compute-0 nova_compute[351485]: 2025-12-03 02:16:51.148 351492 DEBUG oslo_concurrency.lockutils [None req-ead2c054-9d37-4a59-b496-173d38106dd1 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.924 351492 DEBUG nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.925 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.926 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.927 351492 DEBUG oslo_concurrency.lockutils [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.928 351492 DEBUG nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:16:52 compute-0 nova_compute[351485]: 2025-12-03 02:16:52.929 351492 WARNING nova.compute.manager [req-242eca09-3da6-40c3-9f19-5602ee24c227 req-7e6215cc-bbb8-4ac7-b287-55adcf7f0bfb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state active and task_state None.#033[00m
Dec  3 02:16:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 29 KiB/s wr, 11 op/s
Dec  3 02:16:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:53 compute-0 nova_compute[351485]: 2025-12-03 02:16:53.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:53 compute-0 nova_compute[351485]: 2025-12-03 02:16:53.591 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 28 KiB/s wr, 23 op/s
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.406 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.671 351492 DEBUG nova.compute.manager [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.672 351492 DEBUG nova.compute.manager [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.673 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.673 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:16:55 compute-0 nova_compute[351485]: 2025-12-03 02:16:55.674 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:16:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 67 op/s
Dec  3 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.828 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.829 351492 DEBUG nova.network.neutron [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:16:57 compute-0 nova_compute[351485]: 2025-12-03 02:16:57.852 351492 DEBUG oslo_concurrency.lockutils [req-f2665b51-bfba-4a44-beb0-12fb1f994f7d req-77b302cd-f6f1-4ba5-bd78-025adc2cabc3 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:16:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:16:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:16:58 compute-0 nova_compute[351485]: 2025-12-03 02:16:58.593 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:16:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.8 KiB/s wr, 60 op/s
Dec  3 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.648 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.649 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:16:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:16:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:16:59 compute-0 podman[158098]: time="2025-12-03T02:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46278 "" "Go-http-client/1.1"
Dec  3 02:16:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9587 "" "Go-http-client/1.1"
Dec  3 02:17:00 compute-0 nova_compute[351485]: 2025-12-03 02:17:00.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 64 op/s
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:17:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:17:01 compute-0 nova_compute[351485]: 2025-12-03 02:17:01.873 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 64 op/s
Dec  3 02:17:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:03 compute-0 nova_compute[351485]: 2025-12-03 02:17:03.596 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec  3 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.416 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.746 351492 DEBUG nova.objects.instance [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'flavor' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.803 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:05 compute-0 nova_compute[351485]: 2025-12-03 02:17:05.804 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 KiB/s wr, 51 op/s
Dec  3 02:17:07 compute-0 nova_compute[351485]: 2025-12-03 02:17:07.869 351492 DEBUG nova.network.neutron [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.037 351492 DEBUG nova.compute.manager [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.038 351492 DEBUG nova.compute.manager [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.038 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.527 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:08 compute-0 nova_compute[351485]: 2025-12-03 02:17:08.601 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 2.4 KiB/s wr, 5 op/s
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.813 351492 DEBUG nova.network.neutron [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:09 compute-0 podman[447796]: 2025-12-03 02:17:09.837748722 +0000 UTC m=+0.088017691 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.848 351492 DEBUG oslo_concurrency.lockutils [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.848 351492 DEBUG nova.compute.manager [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.849 351492 DEBUG nova.compute.manager [None req-2af10689-3986-425b-97f3-87d84cbdfdec 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] network_info to inject: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.851 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:09 compute-0 nova_compute[351485]: 2025-12-03 02:17:09.851 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:17:09 compute-0 podman[447794]: 2025-12-03 02:17:09.857426648 +0000 UTC m=+0.106209846 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  3 02:17:09 compute-0 podman[447795]: 2025-12-03 02:17:09.867785062 +0000 UTC m=+0.112681950 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  3 02:17:10 compute-0 nova_compute[351485]: 2025-12-03 02:17:10.422 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.4 KiB/s wr, 5 op/s
Dec  3 02:17:11 compute-0 nova_compute[351485]: 2025-12-03 02:17:11.443 351492 DEBUG nova.objects.instance [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'flavor' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:11 compute-0 nova_compute[351485]: 2025-12-03 02:17:11.483 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.498 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.499 351492 DEBUG nova.network.neutron [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.527 351492 DEBUG oslo_concurrency.lockutils [req-120044bd-e8c3-435e-9040-45776c293a57 req-228342ba-6595-4325-b3b1-11150899ef58 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.528 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:12 compute-0 nova_compute[351485]: 2025-12-03 02:17:12.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec  3 02:17:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.742 351492 DEBUG nova.network.neutron [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.946 351492 DEBUG nova.compute.manager [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.948 351492 DEBUG nova.compute.manager [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing instance network info cache due to event network-changed-b7fa8023-e50c-4bea-be79-8fbe005f0b8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:17:13 compute-0 nova_compute[351485]: 2025-12-03 02:17:13.949 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.366 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.367 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.368 351492 INFO nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Rebooting instance#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.392 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.393 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.394 351492 DEBUG nova.network.neutron [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.432 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.744 351492 DEBUG nova.network.neutron [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.774 351492 DEBUG oslo_concurrency.lockutils [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.774 351492 DEBUG nova.compute.manager [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.775 351492 DEBUG nova.compute.manager [None req-4a929283-4bc4-4f7b-bbb7-a7bab86eb662 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] network_info to inject: |[{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.782 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:15 compute-0 nova_compute[351485]: 2025-12-03 02:17:15.783 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Refreshing network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:17:15 compute-0 podman[447853]: 2025-12-03 02:17:15.861280824 +0000 UTC m=+0.120662725 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00107|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00108|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00109|binding|INFO|Releasing lport f4f388aa-0af5-4918-b8ad-5c74c22057c6 from this chassis (sb_readonly=0)
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.390 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.532 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.533 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.534 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.535 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.536 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.539 351492 INFO nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Terminating instance#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.542 351492 DEBUG nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.613 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:16 compute-0 kernel: tapb7fa8023-e5 (unregistering): left promiscuous mode
Dec  3 02:17:16 compute-0 NetworkManager[48912]: <info>  [1764728236.6888] device (tapb7fa8023-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.709 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00110|binding|INFO|Releasing lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a from this chassis (sb_readonly=0)
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00111|binding|INFO|Setting lport b7fa8023-e50c-4bea-be79-8fbe005f0b8a down in Southbound
Dec  3 02:17:16 compute-0 ovn_controller[89134]: 2025-12-03T02:17:16Z|00112|binding|INFO|Removing iface tapb7fa8023-e5 ovn-installed in OVS
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.730 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:b3:fa 10.100.0.3'], port_security=['fa:16:3e:12:b3:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4f50e501-f565-4e1f-aa02-df921702eff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9efdda7cf984595a9c5a855bae62b0e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '532f80d5-065d-43cb-9604-ad1c2a6e3902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=319776e3-1c91-4ec0-bfb2-2325dfaa1fa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=b7fa8023-e50c-4bea-be79-8fbe005f0b8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.731 288528 INFO neutron.agent.ovn.metadata.agent [-] Port b7fa8023-e50c-4bea-be79-8fbe005f0b8a in datapath a5e23dc0-bcc2-406c-bc7f-b978295be94b unbound from our chassis#033[00m
Dec  3 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.733 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5e23dc0-bcc2-406c-bc7f-b978295be94b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.734 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9905fe-46dd-4e0f-951f-eb2837e32eab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:16.734 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b namespace which is not needed anymore#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.740 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  3 02:17:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 46.477s CPU time.
Dec  3 02:17:16 compute-0 systemd-machined[138558]: Machine qemu-6-instance-00000006 terminated.
Dec  3 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : haproxy version is 2.8.14-c23fe91
Dec  3 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [NOTICE]   (444481) : path to executable is /usr/sbin/haproxy
Dec  3 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [ALERT]    (444481) : Current worker (444494) exited with code 143 (Terminated)
Dec  3 02:17:16 compute-0 neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b[444426]: [WARNING]  (444481) : All workers exited. Exiting... (0)
Dec  3 02:17:16 compute-0 systemd[1]: libpod-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope: Deactivated successfully.
Dec  3 02:17:16 compute-0 podman[447913]: 2025-12-03 02:17:16.973747679 +0000 UTC m=+0.076715321 container died 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.983 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:16 compute-0 nova_compute[351485]: 2025-12-03 02:17:16.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.000 351492 INFO nova.virt.libvirt.driver [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Instance destroyed successfully.#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.001 351492 DEBUG nova.objects.instance [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lazy-loading 'resources' on Instance uuid 4f50e501-f565-4e1f-aa02-df921702eff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d0f5e97a1c9cf6a7b1ce8133ccb65b7a2748d41d5e4c00f49714ed27a9e8b68-merged.mount: Deactivated successfully.
Dec  3 02:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363-userdata-shm.mount: Deactivated successfully.
Dec  3 02:17:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 24 KiB/s wr, 3 op/s
Dec  3 02:17:17 compute-0 podman[447913]: 2025-12-03 02:17:17.051098638 +0000 UTC m=+0.154066250 container cleanup 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.056 351492 DEBUG nova.virt.libvirt.vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1950125250',display_name='tempest-AttachInterfacesUnderV243Test-server-1950125250',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1950125250',id=6,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBB9OuHdIBdpYaktjGsefgccfH8R9SNK99mHHbJQ9rg+G2U1LTvmjO9Wsnt6ghp9uwnzyNl9odxW0s4EjHMYofeke7VnvOokwl4rSnaOh/gTQhB30j9Q5ponmvnWGOY9dA==',key_name='tempest-keypair-48380121',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9efdda7cf984595a9c5a855bae62b0e',ramdisk_id='',reservation_id='r-dnx5z6kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1651825730',owner_user_name='tempest-AttachInterfacesUnderV243Test-1651825730-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c7d81f1f9e4989b1eb8b8cf96bbf11',uuid=4f50e501-f565-4e1f-aa02-df921702eff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.057 351492 DEBUG nova.network.os_vif_util [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converting VIF {"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.058 351492 DEBUG nova.network.os_vif_util [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.058 351492 DEBUG os_vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.064 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.064 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7fa8023-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.070 351492 INFO os_vif [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:b3:fa,bridge_name='br-int',has_traffic_filtering=True,id=b7fa8023-e50c-4bea-be79-8fbe005f0b8a,network=Network(a5e23dc0-bcc2-406c-bc7f-b978295be94b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7fa8023-e5')#033[00m
Dec  3 02:17:17 compute-0 systemd[1]: libpod-conmon-1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363.scope: Deactivated successfully.
Dec  3 02:17:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:17:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341854959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.179 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:17 compute-0 podman[447949]: 2025-12-03 02:17:17.18158809 +0000 UTC m=+0.084284716 container remove 1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.192 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3cca0d28-288c-4823-87d7-57897f6b91f6]: (4, ('Wed Dec  3 02:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b (1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363)\n1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363\nWed Dec  3 02:17:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b (1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363)\n1850961de0e79545d5e6096d2e1507ace37214bae370e4c395b25878f1ca1363\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.195 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[470c14fe-11f1-4eb5-b3c2-3b9c79758fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.196 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5e23dc0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 kernel: tapa5e23dc0-b0: left promiscuous mode
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.219 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.222 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[32dce629-196f-43ac-89ec-d507fe95db57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.226 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.236 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[028bb779-cb36-41a2-9e5f-c787e26a851d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.238 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5034d6fb-2c7d-4a93-aca0-033c6ed8c3ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.262 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[df75a8ae-797f-4051-84ab-af23b56fcc96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698614, 'reachable_time': 15469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447985, 'error': None, 'target': 'ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.268 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5e23dc0-bcc2-406c-bc7f-b978295be94b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:17:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:17.268 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[c813d9b1-76ba-4ee1-a098-4bec661c05d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:17 compute-0 systemd[1]: run-netns-ovnmeta\x2da5e23dc0\x2dbcc2\x2d406c\x2dbc7f\x2db978295be94b.mount: Deactivated successfully.
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.305 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.305 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.326 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.326 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.335 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.335 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.883 351492 INFO nova.virt.libvirt.driver [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deleting instance files /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9_del#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.884 351492 INFO nova.virt.libvirt.driver [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deletion of /var/lib/nova/instances/4f50e501-f565-4e1f-aa02-df921702eff9_del complete#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.900 351492 DEBUG nova.network.neutron [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.908 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3668MB free_disk=59.876190185546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.909 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.946 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.948 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 INFO nova.compute.manager [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 1.45 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 DEBUG oslo.service.loopingcall [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.990 351492 DEBUG nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:17:17 compute-0 nova_compute[351485]: 2025-12-03 02:17:17.991 351492 DEBUG nova.network.neutron [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:17:18 compute-0 kernel: tapee5c2dfc-04 (unregistering): left promiscuous mode
Dec  3 02:17:18 compute-0 NetworkManager[48912]: <info>  [1764728238.1712] device (tapee5c2dfc-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.172 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updated VIF entry in instance network info cache for port b7fa8023-e50c-4bea-be79-8fbe005f0b8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.173 351492 DEBUG nova.network.neutron [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [{"id": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "address": "fa:16:3e:12:b3:fa", "network": {"id": "a5e23dc0-bcc2-406c-bc7f-b978295be94b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1951903174-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9efdda7cf984595a9c5a855bae62b0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7fa8023-e5", "ovs_interfaceid": "b7fa8023-e50c-4bea-be79-8fbe005f0b8a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.177 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4f50e501-f565-4e1f-aa02-df921702eff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.181 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.181 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.182 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.182 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00113|binding|INFO|Releasing lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 from this chassis (sb_readonly=0)
Dec  3 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00114|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 down in Southbound
Dec  3 02:17:18 compute-0 ovn_controller[89134]: 2025-12-03T02:17:18Z|00115|binding|INFO|Removing iface tapee5c2dfc-04 ovn-installed in OVS
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.190 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.198 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.199 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a unbound from our chassis#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.202 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fdf214a-0f6e-4e5d-b449-e1988827937a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.203 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[159cedb3-a45b-4205-aca9-f3a07247ecc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.203 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace which is not needed anymore#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.207 351492 DEBUG oslo_concurrency.lockutils [req-d3eefc5c-134d-4313-b6b5-c2093f2ce7a6 req-5bb08c4d-f295-4d45-93de-0a3351aa2306 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4f50e501-f565-4e1f-aa02-df921702eff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  3 02:17:18 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 44.035s CPU time.
Dec  3 02:17:18 compute-0 systemd-machined[138558]: Machine qemu-8-instance-00000008 terminated.
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.270 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.271 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.272 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.272 351492 DEBUG oslo_concurrency.lockutils [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.273 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.273 351492 DEBUG nova.compute.manager [req-fabdad7d-7d79-45a1-9bad-cf39ce03bd47 req-5a3586c6-76bf-4741-9412-2a1183db59c4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-unplugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.294 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.308 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.321 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance destroyed successfully.#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.322 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'resources' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.347 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.348 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.349 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.349 351492 DEBUG os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.352 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee5c2dfc-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.358 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.361 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.368 351492 INFO os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.378 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start _get_guest_xml network_info=[{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.388 351492 WARNING nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.396 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.397 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.407 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.408 351492 DEBUG nova.virt.libvirt.host [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.409 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.410 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.411 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.412 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.413 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.413 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.414 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.415 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.415 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.416 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.417 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.417 351492 DEBUG nova.virt.hardware [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.418 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.441 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : haproxy version is 2.8.14-c23fe91
Dec  3 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [NOTICE]   (445216) : path to executable is /usr/sbin/haproxy
Dec  3 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [ALERT]    (445216) : Current worker (445218) exited with code 143 (Terminated)
Dec  3 02:17:18 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[445211]: [WARNING]  (445216) : All workers exited. Exiting... (0)
Dec  3 02:17:18 compute-0 systemd[1]: libpod-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope: Deactivated successfully.
Dec  3 02:17:18 compute-0 podman[448015]: 2025-12-03 02:17:18.459879216 +0000 UTC m=+0.080399265 container died a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.497 351492 DEBUG nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.502 351492 DEBUG oslo_concurrency.lockutils [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-087efa0144787524a70b8446fc5a09fbd51303045924a94f4a2b128c2b8cbdbc-merged.mount: Deactivated successfully.
Dec  3 02:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5-userdata-shm.mount: Deactivated successfully.
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.503 351492 DEBUG nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.503 351492 WARNING nova.compute.manager [req-7a5a4454-ce20-4b6c-b061-e4f4998294ac req-d34d0cc2-1b13-465c-8b07-861baa8fb9b9 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.506 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:18 compute-0 podman[448015]: 2025-12-03 02:17:18.529308341 +0000 UTC m=+0.149828370 container cleanup a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:17:18 compute-0 systemd[1]: libpod-conmon-a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5.scope: Deactivated successfully.
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 podman[448046]: 2025-12-03 02:17:18.635124504 +0000 UTC m=+0.067747087 container remove a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.649 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2830a-2a69-45fe-8782-5853219c3ae6]: (4, ('Wed Dec  3 02:17:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5)\na7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5\nWed Dec  3 02:17:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (a7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5)\na7e32c6b2ec711ff4952d75dd39991677c8777498e40fcc11f90542a51cdecf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.651 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f70a84a3-68a3-473c-ad95-91bf45d5bb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.653 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.658 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 kernel: tap2fdf214a-00: left promiscuous mode
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.680 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[86f3b1e9-8c1f-4de9-a34c-5e68b52233c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.698 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[098c5367-900c-4424-92f8-01276ce39be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.701 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac126ba-6778-4ef7-ad72-753278ed7506]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.718 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[644b900c-f060-4f4e-bbe5-6234938607da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699295, 'reachable_time': 28728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448097, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fdf214a\x2d0f6e\x2d4e5d\x2db449\x2de1988827937a.mount: Deactivated successfully.
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.724 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:17:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:18.724 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[85713137-db5a-4f94-ba94-6cc9897baca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:17:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913698756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:17:18 compute-0 nova_compute[351485]: 2025-12-03 02:17:18.977 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:17:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174846856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.039 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 3 op/s
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.051 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.080 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.100 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.139 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.139 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.511 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.512 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.512 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.520 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:17:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:19.522 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:17:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:17:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829905297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.560 351492 DEBUG oslo_concurrency.processutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.561 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.561 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.562 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.563 351492 DEBUG nova.objects.instance [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.584 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <uuid>a48b4084-369d-432a-9f47-9378cdcc011f</uuid>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <name>instance-00000008</name>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:name>tempest-ServerActionsTestJSON-server-925455337</nova:name>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:17:18</nova:creationTime>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:user uuid="292dd1da4e67424b855327b32f0623b7">tempest-ServerActionsTestJSON-225723275-project-member</nova:user>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:project uuid="b95bb4c57d3543acb25997bedee9dec3">tempest-ServerActionsTestJSON-225723275</nova:project>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <nova:port uuid="ee5c2dfc-04c3-400a-8073-6f2c65dcea03">
Dec  3 02:17:19 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <system>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="serial">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="uuid">a48b4084-369d-432a-9f47-9378cdcc011f</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </system>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <os>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </os>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <features>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </features>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk">
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </source>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/a48b4084-369d-432a-9f47-9378cdcc011f_disk.config">
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </source>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:17:19 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:ff:dd:2f"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <target dev="tapee5c2dfc-04"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f/console.log" append="off"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <video>
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </video>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <input type="keyboard" bus="usb"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:17:19 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:17:19 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:17:19 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:17:19 compute-0 nova_compute[351485]: </domain>
Dec  3 02:17:19 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.585 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.586 351492 DEBUG nova.virt.libvirt.driver [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.587 351492 DEBUG nova.virt.libvirt.vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.587 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG nova.network.os_vif_util [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.589 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.589 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.594 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.594 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee5c2dfc-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.595 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee5c2dfc-04, col_values=(('external_ids', {'iface-id': 'ee5c2dfc-04c3-400a-8073-6f2c65dcea03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:dd:2f', 'vm-uuid': 'a48b4084-369d-432a-9f47-9378cdcc011f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.5989] manager: (tapee5c2dfc-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.597 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.607 351492 INFO os_vif [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')#033[00m
Dec  3 02:17:19 compute-0 kernel: tapee5c2dfc-04: entered promiscuous mode
Dec  3 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.7476] manager: (tapee5c2dfc-04): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Dec  3 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00116|binding|INFO|Claiming lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for this chassis.
Dec  3 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00117|binding|INFO|ee5c2dfc-04c3-400a-8073-6f2c65dcea03: Claiming fa:16:3e:ff:dd:2f 10.100.0.9
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.751 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.759 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.762 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a bound to our chassis#033[00m
Dec  3 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00118|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 ovn-installed in OVS
Dec  3 02:17:19 compute-0 ovn_controller[89134]: 2025-12-03T02:17:19Z|00119|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 up in Southbound
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.771 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.772 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.767 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fdf214a-0f6e-4e5d-b449-e1988827937a#033[00m
Dec  3 02:17:19 compute-0 nova_compute[351485]: 2025-12-03 02:17:19.788 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.788 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2b313f-5c55-4a28-bcb1-ea3a20c1a8b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.791 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fdf214a-01 in ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.794 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fdf214a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.795 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1707023-8fea-47b0-97e9-3b3cc19f73b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.796 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bac3cb28-9f75-4aa5-b3c7-953bed4bb5d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.814 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[47395e40-f035-4e91-8147-1473a4e169a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 systemd-udevd[448160]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.8354] device (tapee5c2dfc-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.8367] device (tapee5c2dfc-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:17:19 compute-0 systemd-machined[138558]: New machine qemu-11-instance-00000008.
Dec  3 02:17:19 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000008.
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.851 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[36df3c71-1bfd-4225-ad7f-e9b8d3ebacd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.904 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c856f297-a283-428e-874c-41e2c381b374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.922 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[77b3cc1a-2ddc-45dc-b802-8d6f4c2b4cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 systemd-udevd[448164]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:17:19 compute-0 NetworkManager[48912]: <info>  [1764728239.9358] manager: (tap2fdf214a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.969 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac6955a-5699-4785-a576-84587d62be71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:19.976 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[04eb58fc-7482-4a16-81d0-4b2c267d80ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 NetworkManager[48912]: <info>  [1764728240.0271] device (tap2fdf214a-00): carrier: link connected
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.032 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[efb9d8e2-6564-4aed-843a-a938f0b60204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.054 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d33ddf24-bdbc-44bb-93ec-7abb6b314a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707950, 'reachable_time': 21616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448253, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 podman[448168]: 2025-12-03 02:17:20.075799385 +0000 UTC m=+0.145989262 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:17:20 compute-0 podman[448182]: 2025-12-03 02:17:20.077638237 +0000 UTC m=+0.125115331 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.076 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[71d6c632-8b2a-4f92-93fe-2999375ef582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:62d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707950, 'tstamp': 707950}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448276, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.097 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cb5aa4-6d86-47d5-9454-918e1e5eddc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fdf214a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707950, 'reachable_time': 21616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448288, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 podman[448167]: 2025-12-03 02:17:20.10424491 +0000 UTC m=+0.184596824 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:17:20 compute-0 podman[448174]: 2025-12-03 02:17:20.139876518 +0000 UTC m=+0.173968523 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible)
Dec  3 02:17:20 compute-0 podman[448169]: 2025-12-03 02:17:20.139865127 +0000 UTC m=+0.198158667 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.148 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fefe1-7fff-434a-92cd-55d2ed22558c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.224 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5918ea-2bea-47f4-989b-b2ad0097f2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.225 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.225 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.226 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fdf214a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:20 compute-0 kernel: tap2fdf214a-00: entered promiscuous mode
Dec  3 02:17:20 compute-0 NetworkManager[48912]: <info>  [1764728240.2323] manager: (tap2fdf214a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.238 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fdf214a-00, col_values=(('external_ids', {'iface-id': 'c8314dfe-5b76-4819-9b3e-1cb76a272253'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.239 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:20 compute-0 ovn_controller[89134]: 2025-12-03T02:17:20Z|00120|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.254 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.254 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.256 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c1241da4-7650-47b4-8887-733bd1f60399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.257 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/2fdf214a-0f6e-4e5d-b449-e1988827937a.pid.haproxy
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID 2fdf214a-0f6e-4e5d-b449-e1988827937a
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:17:20 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:20.257 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'env', 'PROCESS_TAG=haproxy-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fdf214a-0f6e-4e5d-b449-e1988827937a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.313 351492 DEBUG nova.network.neutron [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.331 351492 INFO nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Took 2.34 seconds to deallocate network for instance.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.378 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.379 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.427 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.428 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG oslo_concurrency.lockutils [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.429 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] No waiting events found dispatching network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.430 351492 WARNING nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received unexpected event network-vif-plugged-b7fa8023-e50c-4bea-be79-8fbe005f0b8a for instance with vm_state deleted and task_state None.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.430 351492 DEBUG nova.compute.manager [req-d2b419c3-790e-459c-9c5e-fbeff3d6fefa req-ef58cc70-bdf4-4753-aafa-bc72f13198e2 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Received event network-vif-deleted-b7fa8023-e50c-4bea-be79-8fbe005f0b8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.493 351492 DEBUG oslo_concurrency.processutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.755 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.756 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.757 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.757 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.759 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.760 351492 WARNING nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.762 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.762 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.763 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.766 351492 DEBUG oslo_concurrency.lockutils [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.767 351492 DEBUG nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.767 351492 WARNING nova.compute.manager [req-ea6435d1-d6c0-417d-b9cc-9c0cc6b18345 req-fb31be64-f438-4003-a3e8-c178e3124177 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.788 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 DEBUG nova.virt.libvirt.host [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Removed pending event for a48b4084-369d-432a-9f47-9378cdcc011f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728240.7871401, a48b4084-369d-432a-9f47-9378cdcc011f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.789 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.796231898 +0000 UTC m=+0.092795067 container create df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.798 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance rebooted successfully.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.807 351492 DEBUG nova.compute.manager [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.814 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.824 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.849 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.850 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728240.7880769, a48b4084-369d-432a-9f47-9378cdcc011f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.850 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Started (Lifecycle Event)#033[00m
Dec  3 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.759734685 +0000 UTC m=+0.056297884 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:17:20 compute-0 systemd[1]: Started libpod-conmon-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope.
Dec  3 02:17:20 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.879 351492 DEBUG oslo_concurrency.lockutils [None req-d743fc42-4c1c-4ed1-95a7-fb328cc57163 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.882 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:17:20 compute-0 nova_compute[351485]: 2025-12-03 02:17:20.888 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaead3289de756df5c362e51f445187494ce76bdc94cf33a7cf5eb23ba12419/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.940456198 +0000 UTC m=+0.237019407 container init df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 02:17:20 compute-0 podman[448388]: 2025-12-03 02:17:20.948660271 +0000 UTC m=+0.245223450 container start df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 02:17:20 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : New worker (448410) forked
Dec  3 02:17:20 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : Loading success.
Dec  3 02:17:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:17:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735971706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:17:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 211 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 23 op/s
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.047 351492 DEBUG oslo_concurrency.processutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.057 351492 DEBUG nova.compute.provider_tree [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.096 351492 DEBUG nova.scheduler.client.report [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.131 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.140 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.141 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.166 351492 INFO nova.scheduler.client.report [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Deleted allocations for instance 4f50e501-f565-4e1f-aa02-df921702eff9#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.295 351492 DEBUG oslo_concurrency.lockutils [None req-d6c62894-6720-4b10-b1bb-7408d0e376bd 08c7d81f1f9e4989b1eb8b8cf96bbf11 a9efdda7cf984595a9c5a855bae62b0e - - default default] Lock "4f50e501-f565-4e1f-aa02-df921702eff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.297 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Wed, 03 Dec 2025 02:17:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7765a678-25c1-4978-ab04-cfd159ea1d96 x-openstack-request-id: req-7765a678-25c1-4978-ab04-cfd159ea1d96 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.298 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592", "name": "tempest-TestNetworkBasicOps-server-2141861820", "status": "ACTIVE", "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "user_id": "abdbefadac2a4d98bd33ed8a1a60ff75", "metadata": {}, "hostId": "4b1a91bac1182d0f1d9a1d34a268fb1305a907d06d3942a0b7e61f82", "image": {"id": "ef773cba-72f0-486f-b5e5-792ff26bb688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ef773cba-72f0-486f-b5e5-792ff26bb688"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:16:31Z", "updated": "2025-12-03T02:16:51Z", "addresses": {"tempest-network-smoke--628634883": [{"version": 4, "addr": "10.100.0.3", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ed:5c:3e"}, {"version": 4, "addr": "192.168.122.193", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ed:5c:3e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1925623369", "OS-SRV-USG:launched_at": "2025-12-03T02:16:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1550122294"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.298 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 used request id req-7765a678-25c1-4978-ab04-cfd159ea1d96 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.300 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'name': 'tempest-TestNetworkBasicOps-server-2141861820', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'hostId': '4b1a91bac1182d0f1d9a1d34a268fb1305a907d06d3942a0b7e61f82', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:17:21.301357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592: ceilometer.compute.pollsters.NoVolumeException
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.330 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:17:21.330232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.335 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 / tapae5db7e6-7a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.335 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.337 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.338 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.339 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:17:21.336861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:17:21.337705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:17:21.338752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:17:21.339850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:17:21.340925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.356 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>]
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:17:21.356384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:17:21.357696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.395 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.395 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.396 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.397 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.latency volume: 1993141923 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.latency volume: 3865639 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.399 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:17:21.396621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:17:21.397939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.401 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.405 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:17:21.399316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.406 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:17:21.400663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:17:21.401332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:17:21.403013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:17:21.404123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/cpu volume: 28880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:17:21.405391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:17:21.406809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:17:21.407758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.408 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.409 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.410 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.411 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.compute.pollsters [-] 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:17:21.408732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:17:21.409421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:17:21.410415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:17:21.411523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-2141861820>]
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:17:21.412625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:17:21.413337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.415 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.416 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.417 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.418 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.419 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.426 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:17:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:17:21.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:17:21.414100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.461 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.461 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:21 compute-0 nova_compute[351485]: 2025-12-03 02:17:21.462 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:17:22 compute-0 ovn_controller[89134]: 2025-12-03T02:17:22Z|00121|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:17:22 compute-0 ovn_controller[89134]: 2025-12-03T02:17:22Z|00122|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.863 351492 DEBUG nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.864 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.866 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.866 351492 DEBUG oslo_concurrency.lockutils [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.867 351492 DEBUG nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:17:22 compute-0 nova_compute[351485]: 2025-12-03 02:17:22.868 351492 WARNING nova.compute.manager [req-93b71d7c-a626-443b-9048-ddaf11ffa714 req-6debb44b-4610-4ecd-aa0a-3707d5b36103 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state None.#033[00m
Dec  3 02:17:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 183 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 22 KiB/s wr, 34 op/s
Dec  3 02:17:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.607 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.895 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.933 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.934 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.934 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.935 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.936 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.937 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.938 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:17:23 compute-0 nova_compute[351485]: 2025-12-03 02:17:23.959 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:17:24 compute-0 nova_compute[351485]: 2025-12-03 02:17:24.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec  3 02:17:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 102 op/s
Dec  3 02:17:27 compute-0 nova_compute[351485]: 2025-12-03 02:17:27.390 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:27 compute-0 nova_compute[351485]: 2025-12-03 02:17:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.308121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248308169, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 718, "num_deletes": 250, "total_data_size": 907293, "memory_usage": 921376, "flush_reason": "Manual Compaction"}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248318977, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 583360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38092, "largest_seqno": 38809, "table_properties": {"data_size": 580214, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8380, "raw_average_key_size": 20, "raw_value_size": 573620, "raw_average_value_size": 1405, "num_data_blocks": 47, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728189, "oldest_key_time": 1764728189, "file_creation_time": 1764728248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 10966 microseconds, and 6156 cpu microseconds.
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.319083) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 583360 bytes OK
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.319110) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322037) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322062) EVENT_LOG_v1 {"time_micros": 1764728248322054, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.322081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 903601, prev total WAL file size 903601, number of live WAL files 2.
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.324024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(569KB)], [86(10MB)]
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248324139, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11221806, "oldest_snapshot_seqno": -1}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5730 keys, 8237751 bytes, temperature: kUnknown
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248389690, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8237751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8201354, "index_size": 20991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 145593, "raw_average_key_size": 25, "raw_value_size": 8099645, "raw_average_value_size": 1413, "num_data_blocks": 864, "num_entries": 5730, "num_filter_entries": 5730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.389942) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8237751 bytes
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.392892) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.0 rd, 125.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(33.4) write-amplify(14.1) OK, records in: 6216, records dropped: 486 output_compression: NoCompression
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.392923) EVENT_LOG_v1 {"time_micros": 1764728248392910, "job": 50, "event": "compaction_finished", "compaction_time_micros": 65625, "compaction_time_cpu_micros": 38584, "output_level": 6, "num_output_files": 1, "total_output_size": 8237751, "num_input_records": 6216, "num_output_records": 5730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248393268, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728248398019, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.323331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:17:28.398289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:17:28
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'volumes', 'backups', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  3 02:17:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:17:28 compute-0 nova_compute[351485]: 2025-12-03 02:17:28.621 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:28 compute-0 ovn_controller[89134]: 2025-12-03T02:17:28Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:5c:3e 10.100.0.3
Dec  3 02:17:28 compute-0 ovn_controller[89134]: 2025-12-03T02:17:28Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:5c:3e 10.100.0.3
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 99 op/s
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:17:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:17:29 compute-0 ovn_controller[89134]: 2025-12-03T02:17:29Z|00123|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:17:29 compute-0 ovn_controller[89134]: 2025-12-03T02:17:29Z|00124|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:17:29 compute-0 nova_compute[351485]: 2025-12-03 02:17:29.370 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:29 compute-0 nova_compute[351485]: 2025-12-03 02:17:29.602 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:29 compute-0 podman[158098]: time="2025-12-03T02:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec  3 02:17:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9116 "" "Go-http-client/1.1"
Dec  3 02:17:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 204 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 141 op/s
Dec  3 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:17:31 compute-0 openstack_network_exporter[368278]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.994 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728236.9932077, 4f50e501-f565-4e1f-aa02-df921702eff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:17:31 compute-0 nova_compute[351485]: 2025-12-03 02:17:31.994 351492 INFO nova.compute.manager [-] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:17:32 compute-0 nova_compute[351485]: 2025-12-03 02:17:32.020 351492 DEBUG nova.compute.manager [None req-df58e7e4-40b3-4b7f-bf52-2929f5e9c073 - - - - - -] [instance: 4f50e501-f565-4e1f-aa02-df921702eff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:17:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Dec  3 02:17:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.595 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:17:33 compute-0 nova_compute[351485]: 2025-12-03 02:17:33.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.845 351492 INFO nova.compute.manager [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Get console output#033[00m
Dec  3 02:17:34 compute-0 nova_compute[351485]: 2025-12-03 02:17:34.858 351492 INFO oslo.privsep.daemon [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9a11x_tz/privsep.sock']#033[00m
Dec  3 02:17:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cf8f9eb7-4f26-4adb-87f8-8a418de15bb6 does not exist
Dec  3 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fd37915f-c69b-40d2-90fc-f856f6355b57 does not exist
Dec  3 02:17:35 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 56f35786-796c-4444-b3d6-0658c4579382 does not exist
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.365 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:17:35 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.663 351492 INFO oslo.privsep.daemon [None req-a2cc65d1-4b4a-4903-8aa7-3a0a427ccfd3 abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.530 448603 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.535 448603 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.537 448603 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.538 448603 INFO oslo.privsep.daemon [-] privsep daemon running as pid 448603#033[00m
Dec  3 02:17:35 compute-0 nova_compute[351485]: 2025-12-03 02:17:35.764 448603 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.382131691 +0000 UTC m=+0.099673922 container create a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.329126871 +0000 UTC m=+0.046669142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:36 compute-0 systemd[1]: Started libpod-conmon-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope.
Dec  3 02:17:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.541719205 +0000 UTC m=+0.259261406 container init a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.562849203 +0000 UTC m=+0.280391384 container start a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.566928878 +0000 UTC m=+0.284471049 container attach a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:17:36 compute-0 stupefied_hugle[448713]: 167 167
Dec  3 02:17:36 compute-0 systemd[1]: libpod-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope: Deactivated successfully.
Dec  3 02:17:36 compute-0 conmon[448713]: conmon a92c2086b314d2aadbca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope/container/memory.events
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.580272986 +0000 UTC m=+0.297815237 container died a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 02:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c22174a1ec5d6b7758900d7977c7fe33619f26cdc87dc488b319f9f00ed2f97-merged.mount: Deactivated successfully.
Dec  3 02:17:36 compute-0 podman[448698]: 2025-12-03 02:17:36.658480238 +0000 UTC m=+0.376022459 container remove a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:17:36 compute-0 systemd[1]: libpod-conmon-a92c2086b314d2aadbca9e6eb4eccd70d28c6437055b3fce682d8000aa0e6a9a.scope: Deactivated successfully.
Dec  3 02:17:36 compute-0 podman[448736]: 2025-12-03 02:17:36.926682647 +0000 UTC m=+0.073920963 container create 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:17:36 compute-0 podman[448736]: 2025-12-03 02:17:36.900630979 +0000 UTC m=+0.047869365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:37 compute-0 systemd[1]: Started libpod-conmon-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope.
Dec  3 02:17:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec  3 02:17:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.119929624 +0000 UTC m=+0.267167970 container init 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.135273888 +0000 UTC m=+0.282512224 container start 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:17:37 compute-0 podman[448736]: 2025-12-03 02:17:37.142095551 +0000 UTC m=+0.289333877 container attach 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:17:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:38 compute-0 inspiring_taussig[448751]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:17:38 compute-0 inspiring_taussig[448751]: --> relative data size: 1.0
Dec  3 02:17:38 compute-0 inspiring_taussig[448751]: --> All data devices are unavailable
Dec  3 02:17:38 compute-0 systemd[1]: libpod-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Deactivated successfully.
Dec  3 02:17:38 compute-0 podman[448736]: 2025-12-03 02:17:38.434857817 +0000 UTC m=+1.582096163 container died 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:38 compute-0 systemd[1]: libpod-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Consumed 1.213s CPU time.
Dec  3 02:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-300713f384325cd839c1b11df1a0224378499f3b889b12439400f03fe3682c88-merged.mount: Deactivated successfully.
Dec  3 02:17:38 compute-0 podman[448736]: 2025-12-03 02:17:38.541807543 +0000 UTC m=+1.689045859 container remove 509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:17:38 compute-0 systemd[1]: libpod-conmon-509bd32fd28875c9d1b1f1cdffb36720ef32764a074d1a57b750fee8126f15ca.scope: Deactivated successfully.
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.594 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.627 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015194363726639317 of space, bias 1.0, pg target 0.4558309117991795 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:17:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.813 351492 DEBUG nova.compute.manager [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.814 351492 DEBUG nova.compute.manager [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing instance network info cache due to event network-changed-ae5db7e6-7a7a-4116-954a-be851ee02864. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.816 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.817 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:38 compute-0 nova_compute[351485]: 2025-12-03 02:17:38.818 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Refreshing network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:17:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec  3 02:17:39 compute-0 nova_compute[351485]: 2025-12-03 02:17:39.430 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:39 compute-0 nova_compute[351485]: 2025-12-03 02:17:39.609 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:39 compute-0 podman[448929]: 2025-12-03 02:17:39.826360727 +0000 UTC m=+0.107144542 container create d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:17:39 compute-0 podman[448929]: 2025-12-03 02:17:39.784768631 +0000 UTC m=+0.065552446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:39 compute-0 systemd[1]: Started libpod-conmon-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope.
Dec  3 02:17:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.0010599 +0000 UTC m=+0.281843745 container init d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.014439299 +0000 UTC m=+0.295223124 container start d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:17:40 compute-0 podman[448929]: 2025-12-03 02:17:40.022189778 +0000 UTC m=+0.302973573 container attach d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:17:40 compute-0 systemd[1]: libpod-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope: Deactivated successfully.
Dec  3 02:17:40 compute-0 optimistic_ritchie[448946]: 167 167
Dec  3 02:17:40 compute-0 conmon[448946]: conmon d9435e63e8a86fe3e6e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope/container/memory.events
Dec  3 02:17:40 compute-0 podman[448945]: 2025-12-03 02:17:40.069004262 +0000 UTC m=+0.108023347 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 02:17:40 compute-0 podman[448949]: 2025-12-03 02:17:40.081169586 +0000 UTC m=+0.111059432 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:17:40 compute-0 podman[448982]: 2025-12-03 02:17:40.110496356 +0000 UTC m=+0.056835709 container died d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:17:40 compute-0 podman[448947]: 2025-12-03 02:17:40.128096263 +0000 UTC m=+0.161116498 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e99eadc854e482241e0c61b5d2ca21ac9c2c67952052807764979a8473bb825-merged.mount: Deactivated successfully.
Dec  3 02:17:40 compute-0 podman[448982]: 2025-12-03 02:17:40.166896561 +0000 UTC m=+0.113235894 container remove d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:17:40 compute-0 systemd[1]: libpod-conmon-d9435e63e8a86fe3e6e32be003c9b6e125d7afc3af335ebb39e0e9e9560a298c.scope: Deactivated successfully.
Dec  3 02:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:17:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8594 writes, 38K keys, 8594 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 8594 writes, 8594 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1360 writes, 6401 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.72 MB, 0.01 MB/s#012Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     98.4      0.49              0.22        25    0.020       0      0       0.0       0.0#012  L6      1/0    7.86 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    136.5    111.6      1.63              0.79        24    0.068    122K    13K       0.0       0.0#012 Sum      1/0    7.86 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8    104.8    108.5      2.12              1.01        49    0.043    122K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    119.3    117.7      0.51              0.24        12    0.042     36K   3090       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    136.5    111.6      1.63              0.79        24    0.068    122K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.8      0.49              0.22        24    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 2.1 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 25.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000142 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1629,24.26 MB,7.8756%) FilterBlock(50,347.55 KB,0.110195%) IndexBlock(50,602.67 KB,0.191087%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.476370227 +0000 UTC m=+0.102520561 container create ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.429016317 +0000 UTC m=+0.055166661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:40 compute-0 systemd[1]: Started libpod-conmon-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope.
Dec  3 02:17:40 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.651656177 +0000 UTC m=+0.277806531 container init ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.661790723 +0000 UTC m=+0.287941057 container start ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 02:17:40 compute-0 podman[449027]: 2025-12-03 02:17:40.668713839 +0000 UTC m=+0.294864183 container attach ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 02:17:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]: {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    "0": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "devices": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "/dev/loop3"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            ],
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_name": "ceph_lv0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_size": "21470642176",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "name": "ceph_lv0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "tags": {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_name": "ceph",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.crush_device_class": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.encrypted": "0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_id": "0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.vdo": "0"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            },
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "vg_name": "ceph_vg0"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        }
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    ],
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    "1": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "devices": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "/dev/loop4"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            ],
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_name": "ceph_lv1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_size": "21470642176",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "name": "ceph_lv1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "tags": {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_name": "ceph",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.crush_device_class": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.encrypted": "0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_id": "1",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.vdo": "0"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            },
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "vg_name": "ceph_vg1"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        }
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    ],
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    "2": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "devices": [
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "/dev/loop5"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            ],
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_name": "ceph_lv2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_size": "21470642176",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "name": "ceph_lv2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "tags": {
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.cluster_name": "ceph",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.crush_device_class": "",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.encrypted": "0",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osd_id": "2",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:                "ceph.vdo": "0"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            },
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "type": "block",
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:            "vg_name": "ceph_vg2"
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:        }
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]:    ]
Dec  3 02:17:41 compute-0 stoic_driscoll[449043]: }
Dec  3 02:17:41 compute-0 systemd[1]: libpod-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope: Deactivated successfully.
Dec  3 02:17:41 compute-0 podman[449027]: 2025-12-03 02:17:41.582720759 +0000 UTC m=+1.208871093 container died ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f494aed7c7e573a7b6f4caff08d677b3d723287d6c619d590708721f8c972c-merged.mount: Deactivated successfully.
Dec  3 02:17:41 compute-0 podman[449027]: 2025-12-03 02:17:41.685060235 +0000 UTC m=+1.311210549 container remove ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:17:41 compute-0 systemd[1]: libpod-conmon-ef2c84c93c9ba1d5fd29b3ab4af0a1417980e989192e5313a58a7a39925e206d.scope: Deactivated successfully.
Dec  3 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.916 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updated VIF entry in instance network info cache for port ae5db7e6-7a7a-4116-954a-be851ee02864. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  3 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.918 351492 DEBUG nova.network.neutron [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [{"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 02:17:41 compute-0 nova_compute[351485]: 2025-12-03 02:17:41.952 351492 DEBUG oslo_concurrency.lockutils [req-de21ddac-3b29-4310-8931-4d8c01f17e2e req-e2ee6120-c294-4ed2-82a1-61b4b62dff28 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.134 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.136 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 02:17:42 compute-0 nova_compute[351485]: 2025-12-03 02:17:42.139 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.147 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 02:17:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:42.148 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 02:17:42 compute-0 nova_compute[351485]: 2025-12-03 02:17:42.149 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:17:42 compute-0 podman[449204]: 2025-12-03 02:17:42.879265282 +0000 UTC m=+0.086749555 container create 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:42 compute-0 podman[449204]: 2025-12-03 02:17:42.844294643 +0000 UTC m=+0.051778956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:42 compute-0 systemd[1]: Started libpod-conmon-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope.
Dec  3 02:17:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.048190992 +0000 UTC m=+0.255675285 container init 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:17:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 653 KiB/s wr, 22 op/s
Dec  3 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.0654579 +0000 UTC m=+0.272942153 container start 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.071888462 +0000 UTC m=+0.279372785 container attach 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:17:43 compute-0 zealous_kare[449219]: 167 167
Dec  3 02:17:43 compute-0 systemd[1]: libpod-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope: Deactivated successfully.
Dec  3 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.079003994 +0000 UTC m=+0.286488247 container died 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 02:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e9224f9d4c209f7571da0b8f4467665f7bf40c0fe4cf79ba5bc99c02c0b1baa-merged.mount: Deactivated successfully.
Dec  3 02:17:43 compute-0 podman[449204]: 2025-12-03 02:17:43.176468651 +0000 UTC m=+0.383952904 container remove 548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kare, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:43 compute-0 systemd[1]: libpod-conmon-548c9e44629dc5f7a980465afb9ee9c97f10a44730ad55e85168fced38834bdf.scope: Deactivated successfully.
Dec  3 02:17:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.419635221 +0000 UTC m=+0.077567316 container create e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.388218942 +0000 UTC m=+0.046151067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:17:43 compute-0 systemd[1]: Started libpod-conmon-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope.
Dec  3 02:17:43 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.622887242 +0000 UTC m=+0.280819347 container init e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:17:43 compute-0 nova_compute[351485]: 2025-12-03 02:17:43.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.638845653 +0000 UTC m=+0.296777738 container start e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:17:43 compute-0 podman[449243]: 2025-12-03 02:17:43.645905122 +0000 UTC m=+0.303837217 container attach e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:17:44 compute-0 nova_compute[351485]: 2025-12-03 02:17:44.612 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]: {
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_id": 2,
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "type": "bluestore"
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    },
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_id": 1,
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "type": "bluestore"
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    },
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_id": 0,
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:        "type": "bluestore"
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]:    }
Dec  3 02:17:44 compute-0 intelligent_mclaren[449259]: }
Dec  3 02:17:44 compute-0 systemd[1]: libpod-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Deactivated successfully.
Dec  3 02:17:44 compute-0 podman[449243]: 2025-12-03 02:17:44.893458769 +0000 UTC m=+1.551390894 container died e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:17:44 compute-0 systemd[1]: libpod-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Consumed 1.239s CPU time.
Dec  3 02:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bcb05dc56f8418364ded26ed9db6eaa41cf2bada1254709ab01f1c97fcad89-merged.mount: Deactivated successfully.
Dec  3 02:17:44 compute-0 podman[449243]: 2025-12-03 02:17:44.988832208 +0000 UTC m=+1.646764303 container remove e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 02:17:45 compute-0 systemd[1]: libpod-conmon-e6d28ba88fecc0c60884cfe894ef00b78f42313ca55628389509ec8c66128497.scope: Deactivated successfully.
Dec  3 02:17:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:17:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:17:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 06d157ad-47df-4f84-9e14-2b2fd428c222 does not exist
Dec  3 02:17:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d07edb2f-98f2-4d70-8ec3-fe6aaf2a20a2 does not exist
Dec  3 02:17:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec  3 02:17:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:17:46 compute-0 podman[449355]: 2025-12-03 02:17:46.8851591 +0000 UTC m=+0.137522562 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 02:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:17:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:17:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958934642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:17:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec  3 02:17:48 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:48.140 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:48 compute-0 nova_compute[351485]: 2025-12-03 02:17:48.632 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec  3 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:17:49 compute-0 nova_compute[351485]: 2025-12-03 02:17:49.615 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:49 compute-0 ovn_controller[89134]: 2025-12-03T02:17:49Z|00125|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:17:49 compute-0 ovn_controller[89134]: 2025-12-03T02:17:49Z|00126|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.064 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:50.153 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.734 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.734 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.757 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.843 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.843 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.857 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:17:50 compute-0 nova_compute[351485]: 2025-12-03 02:17:50.857 351492 INFO nova.compute.claims [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:17:50 compute-0 podman[449384]: 2025-12-03 02:17:50.885000196 +0000 UTC m=+0.114323904 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:17:50 compute-0 podman[449376]: 2025-12-03 02:17:50.900102353 +0000 UTC m=+0.143358836 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 02:17:50 compute-0 podman[449377]: 2025-12-03 02:17:50.901131413 +0000 UTC m=+0.137000617 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:17:50 compute-0 podman[449375]: 2025-12-03 02:17:50.90632829 +0000 UTC m=+0.161464779 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 02:17:50 compute-0 podman[449378]: 2025-12-03 02:17:50.916901329 +0000 UTC m=+0.152065833 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc.)
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.017 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.5 KiB/s wr, 1 op/s
Dec  3 02:17:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:17:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115272923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.545 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.560 351492 DEBUG nova.compute.provider_tree [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.579 351492 DEBUG nova.scheduler.client.report [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.604 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.606 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.663 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.663 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.691 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.709 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.827 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.829 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.830 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating image(s)#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.888 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:51 compute-0 nova_compute[351485]: 2025-12-03 02:17:51.967 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.032 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.047 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.081 351492 DEBUG nova.policy [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'abdbefadac2a4d98bd33ed8a1a60ff75', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.136 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.137 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.138 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.139 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.215 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.234 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.686 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:52 compute-0 nova_compute[351485]: 2025-12-03 02:17:52.816 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] resizing rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.035 351492 DEBUG nova.objects.instance [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'migration_context' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.060 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.061 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Ensure instance console log exists: /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:17:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.4 KiB/s wr, 0 op/s
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.063 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.064 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.065 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.254 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Successfully created port: 025b4c8a-b3c9-4114-95f7-f17506286d3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:17:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.484 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.485 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.508 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.601 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.601 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.611 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.611 351492 INFO nova.compute.claims [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.639 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:53 compute-0 nova_compute[351485]: 2025-12-03 02:17:53.844 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:17:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230111650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.408 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.420 351492 DEBUG nova.compute.provider_tree [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.441 351492 DEBUG nova.scheduler.client.report [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.474 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.476 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.509 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Successfully updated port: 025b4c8a-b3c9-4114-95f7-f17506286d3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.537 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.538 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.544 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.544 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.545 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.581 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.608 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.619 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.667 351492 DEBUG nova.compute.manager [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.668 351492 DEBUG nova.compute.manager [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing instance network info cache due to event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.668 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.728 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.729 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.732 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating image(s)#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.781 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.839 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.887 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.895 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.985 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.988 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.989 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:54 compute-0 nova_compute[351485]: 2025-12-03 02:17:54.990 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.025 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.033 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 241 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 626 KiB/s wr, 3 op/s
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.377 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.473 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.539 351492 DEBUG nova.policy [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '085bcee1002d425085c1f09d9b5d3d97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '19ab3b60e4c749c7897f20982829cd8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.650 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] resizing rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.879 351492 DEBUG nova.objects.instance [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'migration_context' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.906 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.906 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Ensure instance console log exists: /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.907 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.907 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:55 compute-0 nova_compute[351485]: 2025-12-03 02:17:55.908 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.523 351492 DEBUG nova.network.neutron [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.550 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.551 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance network_info: |[{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.552 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.552 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.557 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start _get_guest_xml network_info=[{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.576 351492 WARNING nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.594 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.596 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.604 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.605 351492 DEBUG nova.virt.libvirt.host [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.605 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.606 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.607 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.607 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.608 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.608 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.609 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.609 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.610 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.610 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.611 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.611 351492 DEBUG nova.virt.hardware [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:17:56 compute-0 nova_compute[351485]: 2025-12-03 02:17:56.616 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.135 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Successfully created port: c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:17:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:17:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353772158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.186 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.236 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.250 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:17:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368366315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.751 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.754 351492 DEBUG nova.virt.libvirt.vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.755 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.756 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.758 351492 DEBUG nova.objects.instance [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.783 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <uuid>1b83725c-0af2-491f-98d9-bdb0ed1a5979</uuid>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <name>instance-0000000b</name>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:name>tempest-TestNetworkBasicOps-server-455653039</nova:name>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:17:56</nova:creationTime>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:user uuid="abdbefadac2a4d98bd33ed8a1a60ff75">tempest-TestNetworkBasicOps-1039072813-project-member</nova:user>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:project uuid="f8f8e5d142604e8c8aabf1e14a1467ca">tempest-TestNetworkBasicOps-1039072813</nova:project>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <nova:port uuid="025b4c8a-b3c9-4114-95f7-f17506286d3e">
Dec  3 02:17:57 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <system>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="serial">1b83725c-0af2-491f-98d9-bdb0ed1a5979</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="uuid">1b83725c-0af2-491f-98d9-bdb0ed1a5979</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </system>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <os>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </os>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <features>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </features>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk">
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </source>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config">
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </source>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:17:57 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:24:c0:50"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <target dev="tap025b4c8a-b3"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/console.log" append="off"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <video>
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </video>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:17:57 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:17:57 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:17:57 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:17:57 compute-0 nova_compute[351485]: </domain>
Dec  3 02:17:57 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.784 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Preparing to wait for external event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.785 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.785 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.786 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.787 351492 DEBUG nova.virt.libvirt.vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.787 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.788 351492 DEBUG nova.network.os_vif_util [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.789 351492 DEBUG os_vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.791 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.792 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.792 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.798 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap025b4c8a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.799 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap025b4c8a-b3, col_values=(('external_ids', {'iface-id': '025b4c8a-b3c9-4114-95f7-f17506286d3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:c0:50', 'vm-uuid': '1b83725c-0af2-491f-98d9-bdb0ed1a5979'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:57 compute-0 NetworkManager[48912]: <info>  [1764728277.8037] manager: (tap025b4c8a-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.801 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.807 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.812 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.813 351492 INFO os_vif [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3')#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.902 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.903 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.903 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] No VIF found with MAC fa:16:3e:24:c0:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.905 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Using config drive#033[00m
Dec  3 02:17:57 compute-0 nova_compute[351485]: 2025-12-03 02:17:57.966 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.429 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updated VIF entry in instance network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.430 351492 DEBUG nova.network.neutron [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.453 351492 DEBUG oslo_concurrency.lockutils [req-57d85ee9-1df5-4843-ab4b-af62de530db1 req-44962912-4a3b-46de-a9f4-7e0dcac1f89e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:17:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.558 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.619 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Creating config drive at /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.626 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fj184j4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.660 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.704 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Successfully updated port: c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.731 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.731 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquired lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.732 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.779 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9fj184j4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.844 351492 DEBUG nova.storage.rbd_utils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] rbd image 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:17:58 compute-0 nova_compute[351485]: 2025-12-03 02:17:58.854 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:17:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 278 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.113 351492 DEBUG nova.compute.manager [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-changed-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.114 351492 DEBUG nova.compute.manager [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Refreshing instance network info cache due to event network-changed-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.114 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.165 351492 DEBUG oslo_concurrency.processutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config 1b83725c-0af2-491f-98d9-bdb0ed1a5979_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.166 351492 INFO nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deleting local config drive /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979/disk.config because it was imported into RBD.#033[00m
Dec  3 02:17:59 compute-0 kernel: tap025b4c8a-b3: entered promiscuous mode
Dec  3 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.2936] manager: (tap025b4c8a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Dec  3 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00127|binding|INFO|Claiming lport 025b4c8a-b3c9-4114-95f7-f17506286d3e for this chassis.
Dec  3 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00128|binding|INFO|025b4c8a-b3c9-4114-95f7-f17506286d3e: Claiming fa:16:3e:24:c0:50 10.100.0.14
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.299 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00129|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e ovn-installed in OVS
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.331 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.337 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:59 compute-0 systemd-udevd[449985]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.3892] device (tap025b4c8a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:17:59 compute-0 NetworkManager[48912]: <info>  [1764728279.3923] device (tap025b4c8a-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:17:59 compute-0 systemd-machined[138558]: New machine qemu-12-instance-0000000b.
Dec  3 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00130|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e up in Southbound
Dec  3 02:17:59 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.418 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:c0:50 10.100.0.14'], port_security=['fa:16:3e:24:c0:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b83725c-0af2-491f-98d9-bdb0ed1a5979', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0897a5e4-2e8b-4479-bdb4-a75dc9f6f9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=025b4c8a-b3c9-4114-95f7-f17506286d3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.420 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 025b4c8a-b3c9-4114-95f7-f17506286d3e in datapath ed008f09-da46-4507-9be2-7398a4728121 bound to our chassis#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.425 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121#033[00m
Dec  3 02:17:59 compute-0 ovn_controller[89134]: 2025-12-03T02:17:59Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:dd:2f 10.100.0.9
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.452 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[52d075b4-e2be-486c-a6a8-437d203cd16e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.497 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[7c52fd0b-2b82-45e5-a89c-266e04374d83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.501 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c47b3c-49a3-4dd2-a0e2-2296f04202fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.510 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.539 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[34b505b0-4eca-462e-8424-77e4eb9bb875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.560 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0417e203-13fa-44c2-8051-3a643da5e7e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450002, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.581 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[8efb6bf1-2474-48d8-b4d0-a00251749269]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704225, 'tstamp': 704225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450003, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704229, 'tstamp': 704229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450003, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.584 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.586 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:59 compute-0 nova_compute[351485]: 2025-12-03 02:17:59.588 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.593 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.594 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.595 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.595 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.649 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:17:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:17:59.651 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:17:59 compute-0 podman[158098]: time="2025-12-03T02:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec  3 02:17:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9124 "" "Go-http-client/1.1"
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.373 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728280.3726099, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.374 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Started (Lifecycle Event)#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.401 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.411 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728280.3728306, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.412 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.431 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.441 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:18:00 compute-0 nova_compute[351485]: 2025-12-03 02:18:00.462 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.318 351492 DEBUG nova.network.neutron [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.349 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Releasing lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance network_info: |[{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.350 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Refreshing network info cache for port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.354 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start _get_guest_xml network_info=[{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.364 351492 WARNING nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.379 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.380 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.388 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.389 351492 DEBUG nova.virt.libvirt.host [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.389 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.390 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.390 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.391 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.391 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.392 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.393 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.394 351492 DEBUG nova.virt.hardware [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.399 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:18:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:18:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:18:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913447852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.928 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.966 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:01 compute-0 nova_compute[351485]: 2025-12-03 02:18:01.981 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:18:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437113168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.519 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.522 351492 DEBUG nova.virt.libvirt.vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerAddressesTe
stJSON-2068212470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:54Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.523 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.524 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.527 351492 DEBUG nova.objects.instance [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'pci_devices' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.548 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <uuid>40db12af-6ca8-4a4f-88e7-833c3fda87c9</uuid>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <name>instance-0000000c</name>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:name>tempest-ServerAddressesTestJSON-server-143016714</nova:name>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:18:01</nova:creationTime>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:user uuid="085bcee1002d425085c1f09d9b5d3d97">tempest-ServerAddressesTestJSON-2068212470-project-member</nova:user>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:project uuid="19ab3b60e4c749c7897f20982829cd8c">tempest-ServerAddressesTestJSON-2068212470</nova:project>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <nova:port uuid="c6f07ea7-978a-46d9-b7f8-a4c14ac8475f">
Dec  3 02:18:02 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <system>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="serial">40db12af-6ca8-4a4f-88e7-833c3fda87c9</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="uuid">40db12af-6ca8-4a4f-88e7-833c3fda87c9</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </system>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <os>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </os>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <features>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </features>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk">
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </source>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config">
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </source>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:18:02 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:0d:93:5c"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <target dev="tapc6f07ea7-97"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/console.log" append="off"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <video>
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </video>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:18:02 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:18:02 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:18:02 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:18:02 compute-0 nova_compute[351485]: </domain>
Dec  3 02:18:02 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.550 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Preparing to wait for external event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.550 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.551 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.551 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.553 351492 DEBUG nova.virt.libvirt.vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerA
ddressesTestJSON-2068212470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:17:54Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.553 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.555 351492 DEBUG nova.network.os_vif_util [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.556 351492 DEBUG os_vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.558 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.559 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.565 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.566 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f07ea7-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.567 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f07ea7-97, col_values=(('external_ids', {'iface-id': 'c6f07ea7-978a-46d9-b7f8-a4c14ac8475f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:93:5c', 'vm-uuid': '40db12af-6ca8-4a4f-88e7-833c3fda87c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:02 compute-0 NetworkManager[48912]: <info>  [1764728282.5719] manager: (tapc6f07ea7-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.571 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.576 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.586 351492 INFO os_vif [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97')#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.669 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.670 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.671 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] No VIF found with MAC fa:16:3e:0d:93:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.672 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Using config drive#033[00m
Dec  3 02:18:02 compute-0 nova_compute[351485]: 2025-12-03 02:18:02.724 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.290961) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283290997, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 577, "num_deletes": 251, "total_data_size": 581677, "memory_usage": 593448, "flush_reason": "Manual Compaction"}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283299696, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 576035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38810, "largest_seqno": 39386, "table_properties": {"data_size": 572864, "index_size": 1079, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7488, "raw_average_key_size": 19, "raw_value_size": 566527, "raw_average_value_size": 1463, "num_data_blocks": 48, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728248, "oldest_key_time": 1764728248, "file_creation_time": 1764728283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 8813 microseconds, and 4214 cpu microseconds.
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.299761) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 576035 bytes OK
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.299793) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302608) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302631) EVENT_LOG_v1 {"time_micros": 1764728283302623, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.302653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 578467, prev total WAL file size 578467, number of live WAL files 2.
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.303676) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(562KB)], [89(8044KB)]
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283303775, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8813786, "oldest_snapshot_seqno": -1}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5603 keys, 7077159 bytes, temperature: kUnknown
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283369111, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 7077159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7042827, "index_size": 19246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 143654, "raw_average_key_size": 25, "raw_value_size": 6944505, "raw_average_value_size": 1239, "num_data_blocks": 783, "num_entries": 5603, "num_filter_entries": 5603, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.371 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Creating config drive at /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config#033[00m
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.369838) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 7077159 bytes
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.373327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 107.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(27.6) write-amplify(12.3) OK, records in: 6117, records dropped: 514 output_compression: NoCompression
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.373360) EVENT_LOG_v1 {"time_micros": 1764728283373345, "job": 52, "event": "compaction_finished", "compaction_time_micros": 65827, "compaction_time_cpu_micros": 41835, "output_level": 6, "num_output_files": 1, "total_output_size": 7077159, "num_input_records": 6117, "num_output_records": 5603, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283373837, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728283377010, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.303346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:03.377219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.380 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz7fd73a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.434 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updated VIF entry in instance network info cache for port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.436 351492 DEBUG nova.network.neutron [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [{"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.460 351492 DEBUG oslo_concurrency.lockutils [req-8d4d023c-75e2-41d3-ad98-4727e47deee6 req-16a35fce-f5d4-4050-a6cd-b07a47cfd7e7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-40db12af-6ca8-4a4f-88e7-833c3fda87c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.530 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz7fd73a" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.602 351492 DEBUG nova.storage.rbd_utils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] rbd image 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.618 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.650 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.936 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.937 351492 DEBUG oslo_concurrency.processutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config 40db12af-6ca8-4a4f-88e7-833c3fda87c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:03 compute-0 nova_compute[351485]: 2025-12-03 02:18:03.937 351492 INFO nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deleting local config drive /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9/disk.config because it was imported into RBD.#033[00m
Dec  3 02:18:04 compute-0 kernel: tapc6f07ea7-97: entered promiscuous mode
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.0361] manager: (tapc6f07ea7-97): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Dec  3 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00131|binding|INFO|Claiming lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for this chassis.
Dec  3 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00132|binding|INFO|c6f07ea7-978a-46d9-b7f8-a4c14ac8475f: Claiming fa:16:3e:0d:93:5c 10.100.0.6
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.058 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.061 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 bound to our chassis#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.067 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dee48a2c-2a7a-4864-9bd2-f42030910aa8#033[00m
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00133|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f ovn-installed in OVS
Dec  3 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00134|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f up in Southbound
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.088 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[aac84330-10da-4653-b52f-2c460a8c6fa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.092 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdee48a2c-21 in ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.094 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdee48a2c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.094 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[569c3531-84c0-4a18-b0e6-7753c81b3df8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.096 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4bc2b0-45c5-428a-984e-d7bf7be6e818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 systemd-udevd[450184]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.111 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0d1696-902e-4636-bbc5-a487572e6c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 systemd-machined[138558]: New machine qemu-13-instance-0000000c.
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1261] device (tapc6f07ea7-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1274] device (tapc6f07ea7-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:18:04 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.139 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[72ee20af-f1c7-42f6-ada7-1d2c8c06533c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.183 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6e1faa-12c9-4238-9fd7-9fb0fe7948f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.1956] manager: (tapdee48a2c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.194 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3707b1ff-7eab-4246-828a-6d07df96dd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.229 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[0c98e99e-aed3-469f-80e2-cf26fa52c222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.233 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[d22e5a0d-c6d2-443d-bac2-dc5407fb46f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.2647] device (tapdee48a2c-20): carrier: link connected
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.272 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fbeca5-06a0-4d52-b78f-f57fc777d3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.295 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[05d4ad64-4c1b-4745-a5e0-d10a0090b46c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdee48a2c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712374, 'reachable_time': 34062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450215, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.332 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[680cfad4-eda7-4462-8cd3-b02dc5169a35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:20e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712374, 'tstamp': 712374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450216, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.362 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4edee53e-cb76-44c5-8d69-e47a66dfd46e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdee48a2c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712374, 'reachable_time': 34062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450217, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.414 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cd261bc1-6628-447c-9fd8-edc3abb49c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.535 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ecf449-1293-4050-9c47-55434a60750e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.538 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee48a2c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.538 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.539 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdee48a2c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:04 compute-0 kernel: tapdee48a2c-20: entered promiscuous mode
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 NetworkManager[48912]: <info>  [1764728284.5440] manager: (tapdee48a2c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.553 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.564 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdee48a2c-20, col_values=(('external_ids', {'iface-id': '01cdcf90-ecf3-431a-911c-1a03d9741df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.566 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_controller[89134]: 2025-12-03T02:18:04Z|00135|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.569 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.570 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.572 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b5dea5d1-7cc6-42ae-b6d3-e5d9cb6e5c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.573 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-dee48a2c-2a7a-4864-9bd2-f42030910aa8
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/dee48a2c-2a7a-4864-9bd2-f42030910aa8.pid.haproxy
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID dee48a2c-2a7a-4864-9bd2-f42030910aa8
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:18:04 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:04.574 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'env', 'PROCESS_TAG=haproxy-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dee48a2c-2a7a-4864-9bd2-f42030910aa8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:18:04 compute-0 nova_compute[351485]: 2025-12-03 02:18:04.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 308 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 576 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec  3 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.181848975 +0000 UTC m=+0.125510921 container create 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.129836224 +0000 UTC m=+0.073498250 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:18:05 compute-0 systemd[1]: Started libpod-conmon-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope.
Dec  3 02:18:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2458ccabf09d938af728de35a5600b8e9250e78dcce1ee129f34e94e9a713cdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.303897038 +0000 UTC m=+0.247559045 container init 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:18:05 compute-0 podman[450247]: 2025-12-03 02:18:05.314806627 +0000 UTC m=+0.258468603 container start 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 02:18:05 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : New worker (450311) forked
Dec  3 02:18:05 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : Loading success.
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.397 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728285.397048, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.397 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Started (Lifecycle Event)#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.429 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.436 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728285.397138, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.437 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.463 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.470 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.495 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:18:05 compute-0 nova_compute[351485]: 2025-12-03 02:18:05.670 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 577 KiB/s rd, 3.0 MiB/s wr, 111 op/s
Dec  3 02:18:07 compute-0 nova_compute[351485]: 2025-12-03 02:18:07.572 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.062 351492 DEBUG nova.compute.manager [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.065 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.066 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.067 351492 DEBUG oslo_concurrency.lockutils [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.067 351492 DEBUG nova.compute.manager [req-51f164ed-9202-42d3-940e-acb8dfad9531 req-15dfc5ce-984b-431a-b3ae-e0aaa53747f8 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Processing event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.068 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.076 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728288.0760784, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.077 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.081 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.091 351492 INFO nova.virt.libvirt.driver [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance spawned successfully.#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.092 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.111 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.125 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.132 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.133 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.134 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.135 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.136 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.137 351492 DEBUG nova.virt.libvirt.driver [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.160 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.195 351492 INFO nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 16.37 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.195 351492 DEBUG nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.286 351492 INFO nova.compute.manager [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 17.48 seconds to build instance.#033[00m
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.300 351492 DEBUG oslo_concurrency.lockutils [None req-b756af65-5caf-4da5-9136-c2c3aa06036d abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:08 compute-0 nova_compute[351485]: 2025-12-03 02:18:08.649 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 547 KiB/s rd, 898 KiB/s wr, 72 op/s
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.491 351492 DEBUG nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.492 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.493 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.494 351492 DEBUG oslo_concurrency.lockutils [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.495 351492 DEBUG nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.496 351492 WARNING nova.compute.manager [req-42e85dee-271f-433e-a625-9ce629e5c950 req-93a456e0-8294-4dab-9799-afe0b0ddc13e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received unexpected event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with vm_state active and task_state None.#033[00m
Dec  3 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00136|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec  3 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00137|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:18:10 compute-0 ovn_controller[89134]: 2025-12-03T02:18:10Z|00138|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:18:10 compute-0 nova_compute[351485]: 2025-12-03 02:18:10.816 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:10 compute-0 podman[450322]: 2025-12-03 02:18:10.868331604 +0000 UTC m=+0.101531163 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:18:10 compute-0 podman[450320]: 2025-12-03 02:18:10.869129837 +0000 UTC m=+0.111171127 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:10 compute-0 podman[450321]: 2025-12-03 02:18:10.876706421 +0000 UTC m=+0.110097596 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2)
Dec  3 02:18:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 900 KiB/s wr, 79 op/s
Dec  3 02:18:12 compute-0 nova_compute[351485]: 2025-12-03 02:18:12.577 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:12 compute-0 nova_compute[351485]: 2025-12-03 02:18:12.617 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.022 351492 DEBUG nova.compute.manager [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.024 351492 DEBUG nova.compute.manager [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing instance network info cache due to event network-changed-025b4c8a-b3c9-4114-95f7-f17506286d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.025 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.026 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.027 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Refreshing network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:18:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 49 KiB/s wr, 62 op/s
Dec  3 02:18:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:13 compute-0 nova_compute[351485]: 2025-12-03 02:18:13.653 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 26 KiB/s wr, 53 op/s
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.843 351492 DEBUG nova.compute.manager [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.844 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.844 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.845 351492 DEBUG oslo_concurrency.lockutils [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.845 351492 DEBUG nova.compute.manager [req-fab4b438-6636-44a6-acdc-4a10cf8bcfdd req-da5c3950-286c-4eca-8c09-6f33bd6a3b45 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Processing event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.846 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.862 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728295.8609743, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.863 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.866 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.874 351492 INFO nova.virt.libvirt.driver [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance spawned successfully.#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.875 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.889 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.908 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updated VIF entry in instance network info cache for port 025b4c8a-b3c9-4114-95f7-f17506286d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.909 351492 DEBUG nova.network.neutron [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [{"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.911 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.925 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.926 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.926 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.927 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.928 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.928 351492 DEBUG nova.virt.libvirt.driver [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.939 351492 DEBUG oslo_concurrency.lockutils [req-5dff171c-8e88-4985-8c60-82de48d4d5c3 req-58247300-a9cc-41f8-b4c8-b76b0e123b8c 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-1b83725c-0af2-491f-98d9-bdb0ed1a5979" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.940 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.982 351492 INFO nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 21.25 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:18:15 compute-0 nova_compute[351485]: 2025-12-03 02:18:15.983 351492 DEBUG nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:16 compute-0 nova_compute[351485]: 2025-12-03 02:18:16.059 351492 INFO nova.compute.manager [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 22.50 seconds to build instance.#033[00m
Dec  3 02:18:16 compute-0 nova_compute[351485]: 2025-12-03 02:18:16.075 351492 DEBUG oslo_concurrency.lockutils [None req-8ad6496d-d21c-41b4-85ac-d99c1a96054f 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 73 op/s
Dec  3 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00139|binding|INFO|Releasing lport 01cdcf90-ecf3-431a-911c-1a03d9741df1 from this chassis (sb_readonly=0)
Dec  3 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00140|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:18:17 compute-0 ovn_controller[89134]: 2025-12-03T02:18:17Z|00141|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:17 compute-0 podman[450377]: 2025-12-03 02:18:17.900143484 +0000 UTC m=+0.148524343 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.919 351492 DEBUG nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.920 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.920 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.921 351492 DEBUG oslo_concurrency.lockutils [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.921 351492 DEBUG nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] No waiting events found dispatching network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:17 compute-0 nova_compute[351485]: 2025-12-03 02:18:17.922 351492 WARNING nova.compute.manager [req-ffb14a95-272f-4346-9e84-20fafc8cb9cf req-3b786a19-0510-4469-af8f-034fdd3eaf06 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received unexpected event network-vif-plugged-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for instance with vm_state active and task_state None.#033[00m
Dec  3 02:18:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:18 compute-0 nova_compute[351485]: 2025-12-03 02:18:18.875 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 310 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 68 op/s
Dec  3 02:18:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974627673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.211 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.212 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.214 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.215 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.217 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.221 351492 INFO nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Terminating instance#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.224 351492 DEBUG nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.237 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:19 compute-0 kernel: tapc6f07ea7-97 (unregistering): left promiscuous mode
Dec  3 02:18:19 compute-0 NetworkManager[48912]: <info>  [1764728299.3297] device (tapc6f07ea7-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00142|binding|INFO|Releasing lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f from this chassis (sb_readonly=0)
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00143|binding|INFO|Setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f down in Southbound
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.345 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00144|binding|INFO|Removing iface tapc6f07ea7-97 ovn-installed in OVS
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.359 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.360 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.361 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.363 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.369 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f6027f1c-3838-4cd3-b49d-a9cda4e40d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.370 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 namespace which is not needed anymore#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.382 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  3 02:18:19 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 4.986s CPU time.
Dec  3 02:18:19 compute-0 systemd-machined[138558]: Machine qemu-13-instance-0000000c terminated.
Dec  3 02:18:19 compute-0 kernel: tapc6f07ea7-97: entered promiscuous mode
Dec  3 02:18:19 compute-0 NetworkManager[48912]: <info>  [1764728299.4499] manager: (tapc6f07ea7-97): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00145|binding|INFO|Claiming lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f for this chassis.
Dec  3 02:18:19 compute-0 kernel: tapc6f07ea7-97 (unregistering): left promiscuous mode
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.453 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00146|binding|INFO|c6f07ea7-978a-46d9-b7f8-a4c14ac8475f: Claiming fa:16:3e:0d:93:5c 10.100.0.6
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00147|if_status|INFO|Not setting lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f down as sb is readonly
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.471 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.477 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:19 compute-0 ovn_controller[89134]: 2025-12-03T02:18:19Z|00148|binding|INFO|Releasing lport c6f07ea7-978a-46d9-b7f8-a4c14ac8475f from this chassis (sb_readonly=0)
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.481 351492 INFO nova.virt.libvirt.driver [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Instance destroyed successfully.#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.482 351492 DEBUG nova.objects.instance [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lazy-loading 'resources' on Instance uuid 40db12af-6ca8-4a4f-88e7-833c3fda87c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.484 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:93:5c 10.100.0.6'], port_security=['fa:16:3e:0d:93:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '40db12af-6ca8-4a4f-88e7-833c3fda87c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19ab3b60e4c749c7897f20982829cd8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8422e37d-61b1-4fef-9439-a6ea41458932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf2713d8-67bb-4af5-af36-8021ea746eae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.498 351492 DEBUG nova.virt.libvirt.vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-143016714',display_name='tempest-ServerAddressesTestJSON-server-143016714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-143016714',id=12,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19ab3b60e4c749c7897f20982829cd8c',ramdisk_id='',reservation_id='r-qlc2ubob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-2068212470',owner_user_name='tempest-ServerAddressesTestJSON-2068212470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:18:16Z,user_data=None,user_id='085bcee1002d425085c1f09d9b5d3d97',uuid=40db12af-6ca8-4a4f-88e7-833c3fda87c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.500 351492 DEBUG nova.network.os_vif_util [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converting VIF {"id": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "address": "fa:16:3e:0d:93:5c", "network": {"id": "dee48a2c-2a7a-4864-9bd2-f42030910aa8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1676161980-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "19ab3b60e4c749c7897f20982829cd8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f07ea7-97", "ovs_interfaceid": "c6f07ea7-978a-46d9-b7f8-a4c14ac8475f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.501 351492 DEBUG nova.network.os_vif_util [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.501 351492 DEBUG os_vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.504 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.504 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f07ea7-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.507 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.510 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.513 351492 INFO os_vif [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=c6f07ea7-978a-46d9-b7f8-a4c14ac8475f,network=Network(dee48a2c-2a7a-4864-9bd2-f42030910aa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f07ea7-97')#033[00m
Dec  3 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : haproxy version is 2.8.14-c23fe91
Dec  3 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [NOTICE]   (450308) : path to executable is /usr/sbin/haproxy
Dec  3 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [WARNING]  (450308) : Exiting Master process...
Dec  3 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [ALERT]    (450308) : Current worker (450311) exited with code 143 (Terminated)
Dec  3 02:18:19 compute-0 neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8[450299]: [WARNING]  (450308) : All workers exited. Exiting... (0)
Dec  3 02:18:19 compute-0 systemd[1]: libpod-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope: Deactivated successfully.
Dec  3 02:18:19 compute-0 conmon[450299]: conmon 1de37f14aa8d52a7f5b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope/container/memory.events
Dec  3 02:18:19 compute-0 podman[450451]: 2025-12-03 02:18:19.587350949 +0000 UTC m=+0.078833771 container died 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.626 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.627 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.634 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.635 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c-userdata-shm.mount: Deactivated successfully.
Dec  3 02:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2458ccabf09d938af728de35a5600b8e9250e78dcce1ee129f34e94e9a713cdc-merged.mount: Deactivated successfully.
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.643 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.644 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.653 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.654 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:18:19 compute-0 podman[450451]: 2025-12-03 02:18:19.658246395 +0000 UTC m=+0.149729217 container cleanup 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:18:19 compute-0 systemd[1]: libpod-conmon-1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c.scope: Deactivated successfully.
Dec  3 02:18:19 compute-0 podman[450494]: 2025-12-03 02:18:19.762273638 +0000 UTC m=+0.068530660 container remove 1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.772 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[99964c61-3d5f-4774-a0e1-ab6c775eae52]: (4, ('Wed Dec  3 02:18:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 (1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c)\n1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c\nWed Dec  3 02:18:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 (1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c)\n1de37f14aa8d52a7f5b474ddf624a198b96826ecd0cf26d4d2ead1dbc6e51c4c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.782 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee08654-188f-4a6a-b2a3-c9a9592b05d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.783 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee48a2c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.788 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 kernel: tapdee48a2c-20: left promiscuous mode
Dec  3 02:18:19 compute-0 nova_compute[351485]: 2025-12-03 02:18:19.804 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.809 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1c62bca7-615a-4c55-a002-0adbc225a32e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: hostname: compute-0
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.823 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bdd6e4-2854-4496-a4bc-f24c09b3266d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.824 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[208e06ae-941d-4906-8466-5ffd5f508ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.842 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe0af3d-15aa-49e4-9de7-8d316a98430b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712364, 'reachable_time': 40285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450514, 'error': None, 'target': 'ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 systemd[1]: run-netns-ovnmeta\x2ddee48a2c\x2d2a7a\x2d4864\x2d9bd2\x2df42030910aa8.mount: Deactivated successfully.
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.848 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dee48a2c-2a7a-4864-9bd2-f42030910aa8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.848 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[4c13e81d-48a1-45bf-be54-4fa413963953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.849 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.851 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.852 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[325151ab-0728-4fe8-97f2-e70df1916635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.853 288528 INFO neutron.agent.ovn.metadata.agent [-] Port c6f07ea7-978a-46d9-b7f8-a4c14ac8475f in datapath dee48a2c-2a7a-4864-9bd2-f42030910aa8 unbound from our chassis#033[00m
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.854 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dee48a2c-2a7a-4864-9bd2-f42030910aa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:19 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:19.855 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[173cd257-bb52-49ef-ba2e-9a7a43355fd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:19 compute-0 virtnodedevd[351021]: ethtool ioctl error on tapdee48a2c-20: No such device
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.234 351492 INFO nova.virt.libvirt.driver [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deleting instance files /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9_del#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.234 351492 INFO nova.virt.libvirt.driver [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deletion of /var/lib/nova/instances/40db12af-6ca8-4a4f-88e7-833c3fda87c9_del complete#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.289 351492 INFO nova.compute.manager [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.289 351492 DEBUG oslo.service.loopingcall [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.290 351492 DEBUG nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.290 351492 DEBUG nova.network.neutron [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.294 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.85527420043945GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.295 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance a48b4084-369d-432a-9f47-9378cdcc011f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.387 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 1b83725c-0af2-491f-98d9-bdb0ed1a5979 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 40db12af-6ca8-4a4f-88e7-833c3fda87c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.388 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.469 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951564323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:20 compute-0 nova_compute[351485]: 2025-12-03 02:18:20.993 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.002 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.025 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.068 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.069 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 296 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 105 op/s
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.583 351492 DEBUG nova.network.neutron [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.602 351492 INFO nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Took 1.31 seconds to deallocate network for instance.#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.648 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.649 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.779 351492 DEBUG nova.compute.manager [req-056be785-aa5d-4bf2-85b6-c7c7d66f2803 req-5104f012-ef35-4ecf-b58b-c9088f4494cb 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Received event network-vif-deleted-c6f07ea7-978a-46d9-b7f8-a4c14ac8475f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:21 compute-0 nova_compute[351485]: 2025-12-03 02:18:21.807 351492 DEBUG oslo_concurrency.processutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:21 compute-0 podman[450561]: 2025-12-03 02:18:21.875734944 +0000 UTC m=+0.094151704 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 02:18:21 compute-0 podman[450553]: 2025-12-03 02:18:21.882353992 +0000 UTC m=+0.115404156 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  3 02:18:21 compute-0 podman[450554]: 2025-12-03 02:18:21.8861786 +0000 UTC m=+0.113490122 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:18:21 compute-0 podman[450552]: 2025-12-03 02:18:21.903395807 +0000 UTC m=+0.150992303 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:18:21 compute-0 podman[450559]: 2025-12-03 02:18:21.924904736 +0000 UTC m=+0.137816071 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=)
Dec  3 02:18:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349553752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.248 351492 DEBUG oslo_concurrency.processutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.259 351492 DEBUG nova.compute.provider_tree [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.279 351492 DEBUG nova.scheduler.client.report [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.308 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.342 351492 INFO nova.scheduler.client.report [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Deleted allocations for instance 40db12af-6ca8-4a4f-88e7-833c3fda87c9#033[00m
Dec  3 02:18:22 compute-0 nova_compute[351485]: 2025-12-03 02:18:22.454 351492 DEBUG oslo_concurrency.lockutils [None req-c38b8b96-c9d0-4a0d-a420-af324964bdac 085bcee1002d425085c1f09d9b5d3d97 19ab3b60e4c749c7897f20982829cd8c - - default default] Lock "40db12af-6ca8-4a4f-88e7-833c3fda87c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.033 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.070 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.071 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.071 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:18:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 280 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.8 KiB/s wr, 122 op/s
Dec  3 02:18:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.438 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.438 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.440 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.440 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:23 compute-0 nova_compute[351485]: 2025-12-03 02:18:23.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:24 compute-0 nova_compute[351485]: 2025-12-03 02:18:24.506 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 10 KiB/s wr, 111 op/s
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.480 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [{"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.495 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-a48b4084-369d-432a-9f47-9378cdcc011f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.497 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.498 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.500 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:25 compute-0 nova_compute[351485]: 2025-12-03 02:18:25.501 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.001 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.001 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.251 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.252 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.253 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.254 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.255 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.257 351492 INFO nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Terminating instance#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.259 351492 DEBUG nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:18:26 compute-0 kernel: tapee5c2dfc-04 (unregistering): left promiscuous mode
Dec  3 02:18:26 compute-0 NetworkManager[48912]: <info>  [1764728306.3639] device (tapee5c2dfc-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.371 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00149|binding|INFO|Releasing lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 from this chassis (sb_readonly=0)
Dec  3 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00150|binding|INFO|Setting lport ee5c2dfc-04c3-400a-8073-6f2c65dcea03 down in Southbound
Dec  3 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00151|binding|INFO|Removing iface tapee5c2dfc-04 ovn-installed in OVS
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.397 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.402 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:dd:2f 10.100.0.9'], port_security=['fa:16:3e:ff:dd:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a48b4084-369d-432a-9f47-9378cdcc011f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b95bb4c57d3543acb25997bedee9dec3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '323d2b87-5691-4e3e-84a4-5fb1ca8c1538', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49517db8-4396-45c4-bc75-59118441fc2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ee5c2dfc-04c3-400a-8073-6f2c65dcea03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.403 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ee5c2dfc-04c3-400a-8073-6f2c65dcea03 in datapath 2fdf214a-0f6e-4e5d-b449-e1988827937a unbound from our chassis#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.406 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fdf214a-0f6e-4e5d-b449-e1988827937a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.408 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.407 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[cee0464f-7532-4038-bb88-1b00bf029523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.410 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a namespace which is not needed anymore#033[00m
Dec  3 02:18:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  3 02:18:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000008.scope: Consumed 46.994s CPU time.
Dec  3 02:18:26 compute-0 systemd-machined[138558]: Machine qemu-11-instance-00000008 terminated.
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.496 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.506 351492 INFO nova.virt.libvirt.driver [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Instance destroyed successfully.#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.507 351492 DEBUG nova.objects.instance [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lazy-loading 'resources' on Instance uuid a48b4084-369d-432a-9f47-9378cdcc011f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.525 351492 DEBUG nova.virt.libvirt.vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-925455337',display_name='tempest-ServerActionsTestJSON-server-925455337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-925455337',id=8,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGOJzr3C/PPi8eniww/uAf5kjbNsdKavxgkZKaJZFgdiLqS6nfAl7iJt2CTK2Uv8oLXiebIMQ1pupDcRRUQudzYxI5uBKdjcX1Ycil7EMv1Jwv4g9nZX8AidJ89XIoqzA==',key_name='tempest-keypair-354319462',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:15:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b95bb4c57d3543acb25997bedee9dec3',ramdisk_id='',reservation_id='r-4j003m20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-225723275',owner_user_name='tempest-ServerActionsTestJSON-225723275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:17:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='292dd1da4e67424b855327b32f0623b7',uuid=a48b4084-369d-432a-9f47-9378cdcc011f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.526 351492 DEBUG nova.network.os_vif_util [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converting VIF {"id": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "address": "fa:16:3e:ff:dd:2f", "network": {"id": "2fdf214a-0f6e-4e5d-b449-e1988827937a", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-191861003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b95bb4c57d3543acb25997bedee9dec3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5c2dfc-04", "ovs_interfaceid": "ee5c2dfc-04c3-400a-8073-6f2c65dcea03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.527 351492 DEBUG nova.network.os_vif_util [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.528 351492 DEBUG os_vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.533 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.533 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee5c2dfc-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.539 351492 INFO os_vif [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:dd:2f,bridge_name='br-int',has_traffic_filtering=True,id=ee5c2dfc-04c3-400a-8073-6f2c65dcea03,network=Network(2fdf214a-0f6e-4e5d-b449-e1988827937a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5c2dfc-04')#033[00m
Dec  3 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : haproxy version is 2.8.14-c23fe91
Dec  3 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [NOTICE]   (448408) : path to executable is /usr/sbin/haproxy
Dec  3 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [WARNING]  (448408) : Exiting Master process...
Dec  3 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [ALERT]    (448408) : Current worker (448410) exited with code 143 (Terminated)
Dec  3 02:18:26 compute-0 neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a[448404]: [WARNING]  (448408) : All workers exited. Exiting... (0)
Dec  3 02:18:26 compute-0 systemd[1]: libpod-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope: Deactivated successfully.
Dec  3 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00152|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:18:26 compute-0 ovn_controller[89134]: 2025-12-03T02:18:26Z|00153|binding|INFO|Releasing lport c8314dfe-5b76-4819-9b3e-1cb76a272253 from this chassis (sb_readonly=0)
Dec  3 02:18:26 compute-0 podman[450709]: 2025-12-03 02:18:26.639025541 +0000 UTC m=+0.083551214 container died df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.671 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.671 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.672 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.673 351492 DEBUG oslo_concurrency.lockutils [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.673 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.674 351492 DEBUG nova.compute.manager [req-e1edd8f1-97ba-4f02-b41c-b2ce0ae4715c req-06430af4-0f34-4816-9d2a-0d3fd2acb7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-unplugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eaead3289de756df5c362e51f445187494ce76bdc94cf33a7cf5eb23ba12419-merged.mount: Deactivated successfully.
Dec  3 02:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95-userdata-shm.mount: Deactivated successfully.
Dec  3 02:18:26 compute-0 podman[450709]: 2025-12-03 02:18:26.714000332 +0000 UTC m=+0.158525975 container cleanup df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:26 compute-0 systemd[1]: libpod-conmon-df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95.scope: Deactivated successfully.
Dec  3 02:18:26 compute-0 podman[450751]: 2025-12-03 02:18:26.819039834 +0000 UTC m=+0.070941508 container remove df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.848 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a77aafb8-529d-4704-8c97-e15a7b6c1db1]: (4, ('Wed Dec  3 02:18:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95)\ndf6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95\nWed Dec  3 02:18:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a (df6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95)\ndf6275ac70edd41bbefb03e343167c9cf0112ba253c40eb803e2b1de3bfb5a95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.855 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[28a343d5-08c9-4e75-a2a7-af72dd588151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.856 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fdf214a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:26 compute-0 kernel: tap2fdf214a-00: left promiscuous mode
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.861 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 nova_compute[351485]: 2025-12-03 02:18:26.887 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.890 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a6669048-d2fe-43cc-b40b-218a6166143d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.909 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fde918a2-2f2b-47e8-9c24-0039c9989667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.910 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3cdcb5-d809-4610-b3a1-f4efd3833921]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.929 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[30b5db10-7997-42b5-be3f-73a5127a0d25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707937, 'reachable_time': 18647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450765, 'error': None, 'target': 'ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.932 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fdf214a-0f6e-4e5d-b449-e1988827937a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:18:26 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:26.932 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[e00848c9-3415-4dee-8621-a8ef70cb15bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fdf214a\x2d0f6e\x2d4e5d\x2db449\x2de1988827937a.mount: Deactivated successfully.
Dec  3 02:18:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 10 KiB/s wr, 87 op/s
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.245 351492 INFO nova.virt.libvirt.driver [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deleting instance files /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f_del#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.246 351492 INFO nova.virt.libvirt.driver [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deletion of /var/lib/nova/instances/a48b4084-369d-432a-9f47-9378cdcc011f_del complete#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.338 351492 INFO nova.compute.manager [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.339 351492 DEBUG oslo.service.loopingcall [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.339 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.340 351492 DEBUG nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.340 351492 DEBUG nova.network.neutron [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:18:27 compute-0 nova_compute[351485]: 2025-12-03 02:18:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:18:28
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec  3 02:18:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.668 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.841 351492 DEBUG nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.841 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG oslo_concurrency.lockutils [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 DEBUG nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] No waiting events found dispatching network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:28 compute-0 nova_compute[351485]: 2025-12-03 02:18:28.842 351492 WARNING nova.compute.manager [req-185ffd2d-e7e7-4ec9-8eed-f86582208110 req-497c0185-20aa-45f2-abde-06f9a3edb994 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received unexpected event network-vif-plugged-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 for instance with vm_state active and task_state deleting.#033[00m
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 264 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 10 KiB/s wr, 65 op/s
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:18:29 compute-0 ceph-mgr[193109]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1922561230
Dec  3 02:18:29 compute-0 podman[158098]: time="2025-12-03T02:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43812 "" "Go-http-client/1.1"
Dec  3 02:18:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 02:18:30 compute-0 nova_compute[351485]: 2025-12-03 02:18:30.986 351492 DEBUG nova.network.neutron [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.012 351492 INFO nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Took 3.67 seconds to deallocate network for instance.#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.073 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.074 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 11 KiB/s wr, 91 op/s
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.123 351492 DEBUG nova.compute.manager [req-47e016ad-955b-4293-a522-39a5f4c36865 req-b68ed872-9658-497d-907b-c11b18d04327 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Received event network-vif-deleted-ee5c2dfc-04c3-400a-8073-6f2c65dcea03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.222 351492 DEBUG oslo_concurrency.processutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:18:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.466 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476211608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.734 351492 DEBUG oslo_concurrency.processutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.748 351492 DEBUG nova.compute.provider_tree [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.782 351492 DEBUG nova.scheduler.client.report [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.827 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.852 351492 INFO nova.scheduler.client.report [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Deleted allocations for instance a48b4084-369d-432a-9f47-9378cdcc011f#033[00m
Dec  3 02:18:31 compute-0 nova_compute[351485]: 2025-12-03 02:18:31.926 351492 DEBUG oslo_concurrency.lockutils [None req-00aa2088-08d5-417d-9621-1c36b98c7878 292dd1da4e67424b855327b32f0623b7 b95bb4c57d3543acb25997bedee9dec3 - - default default] Lock "a48b4084-369d-432a-9f47-9378cdcc011f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 609 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Dec  3 02:18:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:33 compute-0 nova_compute[351485]: 2025-12-03 02:18:33.670 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.475 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728299.4715793, 40db12af-6ca8-4a4f-88e7-833c3fda87c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.475 351492 INFO nova.compute.manager [-] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.495 351492 DEBUG nova.compute.manager [None req-dbc4c0f7-0844-45b4-aef4-abf6a3f47e65 - - - - - -] [instance: 40db12af-6ca8-4a4f-88e7-833c3fda87c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:18:34 compute-0 nova_compute[351485]: 2025-12-03 02:18:34.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:18:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Dec  3 02:18:35 compute-0 nova_compute[351485]: 2025-12-03 02:18:35.970 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:36 compute-0 nova_compute[351485]: 2025-12-03 02:18:36.539 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec  3 02:18:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:38 compute-0 nova_compute[351485]: 2025-12-03 02:18:38.479 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:38 compute-0 nova_compute[351485]: 2025-12-03 02:18:38.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011079409023572312 of space, bias 1.0, pg target 0.33238227070716936 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:18:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:18:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:18:40 compute-0 ovn_controller[89134]: 2025-12-03T02:18:40Z|00154|binding|INFO|Releasing lport 4fe53946-9a81-46d3-946d-3676da417bd6 from this chassis (sb_readonly=0)
Dec  3 02:18:40 compute-0 nova_compute[351485]: 2025-12-03 02:18:40.864 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.503 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728306.5019524, a48b4084-369d-432a-9f47-9378cdcc011f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.505 351492 INFO nova.compute.manager [-] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.536 351492 DEBUG nova.compute.manager [None req-4d284a25-a9ed-4fc5-a505-8f0c8034ecb5 - - - - - -] [instance: a48b4084-369d-432a-9f47-9378cdcc011f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:18:41 compute-0 nova_compute[351485]: 2025-12-03 02:18:41.543 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:41 compute-0 podman[450791]: 2025-12-03 02:18:41.869857437 +0000 UTC m=+0.097557911 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:18:41 compute-0 podman[450790]: 2025-12-03 02:18:41.879218452 +0000 UTC m=+0.117267519 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 02:18:41 compute-0 podman[450789]: 2025-12-03 02:18:41.897718985 +0000 UTC m=+0.142763200 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:18:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:42.243 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:42 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:42.244 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:18:42 compute-0 nova_compute[351485]: 2025-12-03 02:18:42.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:42 compute-0 nova_compute[351485]: 2025-12-03 02:18:42.629 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 183 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Dec  3 02:18:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:43 compute-0 nova_compute[351485]: 2025-12-03 02:18:43.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:44 compute-0 ovn_controller[89134]: 2025-12-03T02:18:44Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:c0:50 10.100.0.14
Dec  3 02:18:44 compute-0 ovn_controller[89134]: 2025-12-03T02:18:44Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:c0:50 10.100.0.14
Dec  3 02:18:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 188 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 892 KiB/s wr, 18 op/s
Dec  3 02:18:46 compute-0 nova_compute[351485]: 2025-12-03 02:18:46.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev ccc24c86-1339-4fb4-96ad-f7b29a8ad047 does not exist
Dec  3 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d6da5496-a1b5-4a25-b9a0-e610eed8d84d does not exist
Dec  3 02:18:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5f55d163-2b2f-4f04-b646-e92deaa75033 does not exist
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:18:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:18:46 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:18:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:18:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3168155945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:18:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec  3 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:47 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:18:47 compute-0 podman[451113]: 2025-12-03 02:18:47.861600823 +0000 UTC m=+0.089806462 container create b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:18:47 compute-0 podman[451113]: 2025-12-03 02:18:47.825596364 +0000 UTC m=+0.053802043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:47 compute-0 systemd[1]: Started libpod-conmon-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope.
Dec  3 02:18:48 compute-0 nova_compute[351485]: 2025-12-03 02:18:48.003 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.046678499 +0000 UTC m=+0.274884178 container init b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.066221882 +0000 UTC m=+0.294427521 container start b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.073210749 +0000 UTC m=+0.301416388 container attach b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:18:48 compute-0 ecstatic_bohr[451128]: 167 167
Dec  3 02:18:48 compute-0 systemd[1]: libpod-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope: Deactivated successfully.
Dec  3 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.08242141 +0000 UTC m=+0.310627049 container died b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f337f5ecc1c7475be1201222517795ff10fcc08b4a0f5b892fb3410f27685b56-merged.mount: Deactivated successfully.
Dec  3 02:18:48 compute-0 podman[451113]: 2025-12-03 02:18:48.156751743 +0000 UTC m=+0.384957352 container remove b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:18:48 compute-0 podman[451129]: 2025-12-03 02:18:48.157881095 +0000 UTC m=+0.178680096 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  3 02:18:48 compute-0 systemd[1]: libpod-conmon-b247276afa4dd6677cbd7398ba98d92f6c91d0086a1aab7b3c0aa7ef696df8bf.scope: Deactivated successfully.
Dec  3 02:18:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.444029531 +0000 UTC m=+0.096217354 container create 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.40547567 +0000 UTC m=+0.057663493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:48 compute-0 systemd[1]: Started libpod-conmon-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope.
Dec  3 02:18:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.625153105 +0000 UTC m=+0.277340988 container init 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.659115216 +0000 UTC m=+0.311303029 container start 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:18:48 compute-0 podman[451171]: 2025-12-03 02:18:48.668294106 +0000 UTC m=+0.320481939 container attach 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:18:48 compute-0 nova_compute[351485]: 2025-12-03 02:18:48.684 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 207 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Dec  3 02:18:49 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:49.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:50 compute-0 elegant_curie[451187]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:18:50 compute-0 elegant_curie[451187]: --> relative data size: 1.0
Dec  3 02:18:50 compute-0 elegant_curie[451187]: --> All data devices are unavailable
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.018 351492 INFO nova.compute.manager [None req-c1491505-ac29-471f-a2da-cce3edf0bc7c abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Get console output#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.035 448603 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  3 02:18:50 compute-0 systemd[1]: libpod-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Deactivated successfully.
Dec  3 02:18:50 compute-0 systemd[1]: libpod-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Consumed 1.313s CPU time.
Dec  3 02:18:50 compute-0 podman[451171]: 2025-12-03 02:18:50.057208112 +0000 UTC m=+1.709395945 container died 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6072070acb38af8281298d039c3898d81876eac213e5a06635fded9d63306b5-merged.mount: Deactivated successfully.
Dec  3 02:18:50 compute-0 podman[451171]: 2025-12-03 02:18:50.155931536 +0000 UTC m=+1.808119349 container remove 4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:18:50 compute-0 systemd[1]: libpod-conmon-4eaea05378a28ba5d0a5eadb93b8adc2b13bb5a54f9f0b8d786adc2494cf5b44.scope: Deactivated successfully.
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.491 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.492 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.493 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.494 351492 INFO nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Terminating instance#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.496 351492 DEBUG nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:18:50 compute-0 kernel: tap025b4c8a-b3 (unregistering): left promiscuous mode
Dec  3 02:18:50 compute-0 NetworkManager[48912]: <info>  [1764728330.6132] device (tap025b4c8a-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00155|binding|INFO|Releasing lport 025b4c8a-b3c9-4114-95f7-f17506286d3e from this chassis (sb_readonly=0)
Dec  3 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00156|binding|INFO|Setting lport 025b4c8a-b3c9-4114-95f7-f17506286d3e down in Southbound
Dec  3 02:18:50 compute-0 ovn_controller[89134]: 2025-12-03T02:18:50Z|00157|binding|INFO|Removing iface tap025b4c8a-b3 ovn-installed in OVS
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.630 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.633 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.643 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:c0:50 10.100.0.14'], port_security=['fa:16:3e:24:c0:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1b83725c-0af2-491f-98d9-bdb0ed1a5979', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0897a5e4-2e8b-4479-bdb4-a75dc9f6f9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=025b4c8a-b3c9-4114-95f7-f17506286d3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.644 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 025b4c8a-b3c9-4114-95f7-f17506286d3e in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.646 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed008f09-da46-4507-9be2-7398a4728121#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.667 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f07421f1-b485-4c76-b750-d513c20c3b91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  3 02:18:50 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 40.031s CPU time.
Dec  3 02:18:50 compute-0 systemd-machined[138558]: Machine qemu-12-instance-0000000b terminated.
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.702 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfef608-d268-4680-b611-5e09fcfdceeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.706 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[47dea3c9-81ed-46c4-af1d-3c9eb708b7a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.744 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28b4fe-148d-4675-a709-4c323003ca82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.751 351492 INFO nova.virt.libvirt.driver [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Instance destroyed successfully.#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.752 351492 DEBUG nova.objects.instance [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'resources' on Instance uuid 1b83725c-0af2-491f-98d9-bdb0ed1a5979 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.769 351492 DEBUG nova.virt.libvirt.vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:17:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-455653039',display_name='tempest-TestNetworkBasicOps-server-455653039',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-455653039',id=11,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGyLxdmoeScEfSkwzcCczvmCyzQ7WX6pYr3KymEzB5Q09G09n6d3TfahDx7L4JUEY5sh67bwZpAZn3mmGdgttDtWP8gJ/ON+rMTVTFtEqftauFytQHqZZbMU6xxCGBZ6yA==',key_name='tempest-TestNetworkBasicOps-378472767',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:18:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-ux5cl6xd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:18:08Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=1b83725c-0af2-491f-98d9-bdb0ed1a5979,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.770 351492 DEBUG nova.network.os_vif_util [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "address": "fa:16:3e:24:c0:50", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap025b4c8a-b3", "ovs_interfaceid": "025b4c8a-b3c9-4114-95f7-f17506286d3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.771 351492 DEBUG nova.network.os_vif_util [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.772 351492 DEBUG os_vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.775 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.775 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap025b4c8a-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.782 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.786 351492 INFO os_vif [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:c0:50,bridge_name='br-int',has_traffic_filtering=True,id=025b4c8a-b3c9-4114-95f7-f17506286d3e,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap025b4c8a-b3')#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.788 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6d550af5-7a77-4a58-942b-2b324d3f8775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped008f09-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:11:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704212, 'reachable_time': 40538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451351, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.813 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3d494fdd-3f66-46e4-b9ca-9276aaeae14c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704225, 'tstamp': 704225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451357, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped008f09-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704229, 'tstamp': 704229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451357, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.820 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped008f09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.824 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.825 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped008f09-d0, col_values=(('external_ids', {'iface-id': '4fe53946-9a81-46d3-946d-3676da417bd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:50 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:50.825 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:18:50 compute-0 nova_compute[351485]: 2025-12-03 02:18:50.828 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 215 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.260164818 +0000 UTC m=+0.082384262 container create 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.227245236 +0000 UTC m=+0.049464710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:51 compute-0 systemd[1]: Started libpod-conmon-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope.
Dec  3 02:18:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.405032326 +0000 UTC m=+0.227251810 container init 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.416872971 +0000 UTC m=+0.239092375 container start 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.422248414 +0000 UTC m=+0.244467908 container attach 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:18:51 compute-0 jolly_banach[451423]: 167 167
Dec  3 02:18:51 compute-0 systemd[1]: libpod-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope: Deactivated successfully.
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.429619232 +0000 UTC m=+0.251838656 container died 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b5efd33fb09da945d54c1a3c280ad3e739088a1ada9324e13b3d2a97b31600-merged.mount: Deactivated successfully.
Dec  3 02:18:51 compute-0 podman[451407]: 2025-12-03 02:18:51.501491876 +0000 UTC m=+0.323711290 container remove 719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:18:51 compute-0 systemd[1]: libpod-conmon-719f78c69519ea6e734984c7e044551cab1ae36074277c90719c7e16d25750bd.scope: Deactivated successfully.
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.554 351492 INFO nova.virt.libvirt.driver [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deleting instance files /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979_del#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.555 351492 INFO nova.virt.libvirt.driver [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deletion of /var/lib/nova/instances/1b83725c-0af2-491f-98d9-bdb0ed1a5979_del complete#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.630 351492 INFO nova.compute.manager [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.631 351492 DEBUG oslo.service.loopingcall [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.632 351492 DEBUG nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.633 351492 DEBUG nova.network.neutron [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:18:51 compute-0 podman[451446]: 2025-12-03 02:18:51.779910372 +0000 UTC m=+0.093595979 container create eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 02:18:51 compute-0 podman[451446]: 2025-12-03 02:18:51.739477788 +0000 UTC m=+0.053163445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.853 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.854 351492 DEBUG oslo_concurrency.lockutils [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.855 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:51 compute-0 nova_compute[351485]: 2025-12-03 02:18:51.855 351492 DEBUG nova.compute.manager [req-dbf2689a-c850-41d7-b5f5-d06d8aa8a044 req-aa52b054-4c2c-4d78-a40c-6d581b3b86b1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-unplugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:18:51 compute-0 systemd[1]: Started libpod-conmon-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope.
Dec  3 02:18:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.007164712 +0000 UTC m=+0.320850289 container init eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.025780548 +0000 UTC m=+0.339466115 container start eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.030105171 +0000 UTC m=+0.343790738 container attach eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:18:52 compute-0 podman[451488]: 2025-12-03 02:18:52.06649715 +0000 UTC m=+0.094661229 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 02:18:52 compute-0 podman[451465]: 2025-12-03 02:18:52.07426967 +0000 UTC m=+0.136549704 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 02:18:52 compute-0 podman[451463]: 2025-12-03 02:18:52.08699372 +0000 UTC m=+0.173767437 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:18:52 compute-0 podman[451472]: 2025-12-03 02:18:52.087221657 +0000 UTC m=+0.144720716 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:18:52 compute-0 podman[451485]: 2025-12-03 02:18:52.158279227 +0000 UTC m=+0.178351947 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.541 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.541 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.586 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.643 351492 DEBUG nova.network.neutron [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.668 351492 INFO nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Took 1.04 seconds to deallocate network for instance.#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.694 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.694 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.705 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.705 351492 INFO nova.compute.claims [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.713 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.802 351492 DEBUG nova.compute.manager [req-cbb8f252-7cfe-4cbc-8613-b5dc10cb0ab3 req-aad5c0ed-b7b5-4e9e-9bd4-2bf4878579fe 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-deleted-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:52 compute-0 nova_compute[351485]: 2025-12-03 02:18:52.859 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:52 compute-0 modest_beaver[451462]: {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    "0": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "devices": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "/dev/loop3"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            ],
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_name": "ceph_lv0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_size": "21470642176",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "name": "ceph_lv0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "tags": {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_name": "ceph",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.crush_device_class": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.encrypted": "0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_id": "0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.vdo": "0"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            },
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "vg_name": "ceph_vg0"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        }
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    ],
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    "1": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "devices": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "/dev/loop4"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            ],
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_name": "ceph_lv1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_size": "21470642176",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "name": "ceph_lv1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "tags": {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_name": "ceph",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.crush_device_class": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.encrypted": "0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_id": "1",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.vdo": "0"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            },
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "vg_name": "ceph_vg1"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        }
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    ],
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    "2": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "devices": [
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "/dev/loop5"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            ],
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_name": "ceph_lv2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_size": "21470642176",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "name": "ceph_lv2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "tags": {
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.cluster_name": "ceph",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.crush_device_class": "",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.encrypted": "0",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osd_id": "2",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:                "ceph.vdo": "0"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            },
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "type": "block",
Dec  3 02:18:52 compute-0 modest_beaver[451462]:            "vg_name": "ceph_vg2"
Dec  3 02:18:52 compute-0 modest_beaver[451462]:        }
Dec  3 02:18:52 compute-0 modest_beaver[451462]:    ]
Dec  3 02:18:52 compute-0 modest_beaver[451462]: }
Dec  3 02:18:52 compute-0 systemd[1]: libpod-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope: Deactivated successfully.
Dec  3 02:18:52 compute-0 conmon[451462]: conmon eaf2a50f378137f8ecb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope/container/memory.events
Dec  3 02:18:52 compute-0 podman[451446]: 2025-12-03 02:18:52.919167765 +0000 UTC m=+1.232853372 container died eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d5aaf5dd7bcf13a082bc604a5637d0d221f6438bf0240283d7dc0ee11380013-merged.mount: Deactivated successfully.
Dec  3 02:18:53 compute-0 podman[451446]: 2025-12-03 02:18:53.020872593 +0000 UTC m=+1.334558160 container remove eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 02:18:53 compute-0 systemd[1]: libpod-conmon-eaf2a50f378137f8ecb28eb4ee679a0e41ad75b4425ad6fb0292fa17d76ccd31.scope: Deactivated successfully.
Dec  3 02:18:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 191 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec  3 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  3 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  3 02:18:53 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  3 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:53 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833856092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.481 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.489 351492 DEBUG nova.compute.provider_tree [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.522 351492 DEBUG nova.scheduler.client.report [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.542 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.543 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.544 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.604 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.605 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.626 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.632 351492 DEBUG oslo_concurrency.processutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.676 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.686 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.773 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.775 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.775 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating image(s)#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.819 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.889 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.944 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.956 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.996 351492 DEBUG nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG oslo_concurrency.lockutils [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.997 351492 DEBUG nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] No waiting events found dispatching network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:53 compute-0 nova_compute[351485]: 2025-12-03 02:18:53.998 351492 WARNING nova.compute.manager [req-a2b3d20b-0bb1-4346-a414-cfa35427221b req-6f10eff8-1dbe-4c55-b0b1-973418f513a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Received unexpected event network-vif-plugged-025b4c8a-b3c9-4114-95f7-f17506286d3e for instance with vm_state deleted and task_state None.#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.030 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.030 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "d68b22249947adf9ae6139a52d3c87b68df8a601" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.031 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.031 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "d68b22249947adf9ae6139a52d3c87b68df8a601" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.070 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.086 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 48201127-9aa0-4cde-a41d-6790411480a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.112 351492 DEBUG nova.policy [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2de48f7608ea45c8ac558125d72373c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:18:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537960280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.165 351492 DEBUG oslo_concurrency.processutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.171617461 +0000 UTC m=+0.068038826 container create 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.176 351492 DEBUG nova.compute.provider_tree [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.198 351492 DEBUG nova.scheduler.client.report [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:18:54 compute-0 systemd[1]: Started libpod-conmon-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope.
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.232 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.148147057 +0000 UTC m=+0.044568452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.273 351492 INFO nova.scheduler.client.report [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Deleted allocations for instance 1b83725c-0af2-491f-98d9-bdb0ed1a5979#033[00m
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.302784942 +0000 UTC m=+0.199206377 container init 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.318396423 +0000 UTC m=+0.214817808 container start 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:18:54 compute-0 reverent_aryabhata[451876]: 167 167
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.329506268 +0000 UTC m=+0.225927663 container attach 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:18:54 compute-0 systemd[1]: libpod-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope: Deactivated successfully.
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.332299077 +0000 UTC m=+0.228720482 container died 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.348 351492 DEBUG oslo_concurrency.lockutils [None req-4ea5627b-e29e-4683-bf3a-460ae2137bcf abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "1b83725c-0af2-491f-98d9-bdb0ed1a5979" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e6ab937557593bcf0fd400e047d813bc9be199a329879dc9e782760395df42-merged.mount: Deactivated successfully.
Dec  3 02:18:54 compute-0 podman[451844]: 2025-12-03 02:18:54.4179508 +0000 UTC m=+0.314372165 container remove 6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_aryabhata, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:18:54 compute-0 systemd[1]: libpod-conmon-6c23742155cfe6f762216931cc03e46e5aee8ec74cdffb73b46638890e4dc9f5.scope: Deactivated successfully.
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.511 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601 48201127-9aa0-4cde-a41d-6790411480a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.677 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] resizing rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.680608082 +0000 UTC m=+0.088022362 container create db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.639506789 +0000 UTC m=+0.046921099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:18:54 compute-0 systemd[1]: Started libpod-conmon-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope.
Dec  3 02:18:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.830993536 +0000 UTC m=+0.238407896 container init db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.851446995 +0000 UTC m=+0.258861315 container start db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:18:54 compute-0 podman[451919]: 2025-12-03 02:18:54.857718873 +0000 UTC m=+0.265133193 container attach db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.938 351492 DEBUG nova.objects.instance [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.967 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.968 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Ensure instance console log exists: /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.969 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.970 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:54 compute-0 nova_compute[351485]: 2025-12-03 02:18:54.970 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 161 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec  3 02:18:55 compute-0 nova_compute[351485]: 2025-12-03 02:18:55.778 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]: {
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_id": 2,
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "type": "bluestore"
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    },
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_id": 1,
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "type": "bluestore"
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    },
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_id": 0,
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:        "type": "bluestore"
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]:    }
Dec  3 02:18:56 compute-0 eloquent_sinoussi[451973]: }
Dec  3 02:18:56 compute-0 systemd[1]: libpod-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Deactivated successfully.
Dec  3 02:18:56 compute-0 podman[451919]: 2025-12-03 02:18:56.09506451 +0000 UTC m=+1.502478790 container died db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:18:56 compute-0 systemd[1]: libpod-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Consumed 1.230s CPU time.
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.172 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Successfully created port: 0d927baf-41d2-458f-b4c0-1218ba0eec13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f95e1b831b69e7ea0a8cc852cf18890219995a9d970b4f9f96f89384b9a9719-merged.mount: Deactivated successfully.
Dec  3 02:18:56 compute-0 podman[451919]: 2025-12-03 02:18:56.222369002 +0000 UTC m=+1.629783272 container remove db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 02:18:56 compute-0 systemd[1]: libpod-conmon-db8cbba09ee762fecdbb08fdd21336046794065fb2d00d1bc34d758fc2b4aee5.scope: Deactivated successfully.
Dec  3 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  3 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  3 02:18:56 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  3 02:18:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:18:56 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 592e8a07-ea11-4731-9439-ea8d4cdd9bea does not exist
Dec  3 02:18:56 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c84b78c6-74fa-45e6-93ec-ed080e427690 does not exist
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.382 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.382 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.383 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.384 351492 INFO nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Terminating instance#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.386 351492 DEBUG nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:18:56 compute-0 kernel: tapae5db7e6-7a (unregistering): left promiscuous mode
Dec  3 02:18:56 compute-0 NetworkManager[48912]: <info>  [1764728336.5130] device (tapae5db7e6-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00158|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=0)
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.535 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00159|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down in Southbound
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00160|binding|INFO|Removing iface tapae5db7e6-7a ovn-installed in OVS
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.540 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.546 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.547 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.549 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.551 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[89551710-9ee5-41dc-8639-97b953d73237]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.552 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 namespace which is not needed anymore#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.573 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  3 02:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 52.546s CPU time.
Dec  3 02:18:56 compute-0 systemd-machined[138558]: Machine qemu-10-instance-0000000a terminated.
Dec  3 02:18:56 compute-0 kernel: tapae5db7e6-7a: entered promiscuous mode
Dec  3 02:18:56 compute-0 kernel: tapae5db7e6-7a (unregistering): left promiscuous mode
Dec  3 02:18:56 compute-0 NetworkManager[48912]: <info>  [1764728336.6213] manager: (tapae5db7e6-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00161|binding|INFO|Claiming lport ae5db7e6-7a7a-4116-954a-be851ee02864 for this chassis.
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00162|binding|INFO|ae5db7e6-7a7a-4116-954a-be851ee02864: Claiming fa:16:3e:ed:5c:3e 10.100.0.3
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.642 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.655 351492 INFO nova.virt.libvirt.driver [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Instance destroyed successfully.#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.656 351492 DEBUG nova.objects.instance [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lazy-loading 'resources' on Instance uuid 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00163|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 ovn-installed in OVS
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00164|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 up in Southbound
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00165|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=1)
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00166|binding|INFO|Removing iface tapae5db7e6-7a ovn-installed in OVS
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00167|if_status|INFO|Not setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down as sb is readonly
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.671 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00168|binding|INFO|Releasing lport ae5db7e6-7a7a-4116-954a-be851ee02864 from this chassis (sb_readonly=0)
Dec  3 02:18:56 compute-0 ovn_controller[89134]: 2025-12-03T02:18:56Z|00169|binding|INFO|Setting lport ae5db7e6-7a7a-4116-954a-be851ee02864 down in Southbound
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.679 351492 DEBUG nova.virt.libvirt.vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:16:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141861820',display_name='tempest-TestNetworkBasicOps-server-2141861820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141861820',id=10,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDI3XAJe/oWUFcBwASHQKy1+64OXjmmyB8m7y5N7HAPNoYJg/K1iQtuEUIT2NyhA+m3otLmx2JBqvfSdTGVgxCze3o124/xouvwXfOAKv+FU1Zz518hn/q6Xt9p0SK00+w==',key_name='tempest-TestNetworkBasicOps-1925623369',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:16:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8f8e5d142604e8c8aabf1e14a1467ca',ramdisk_id='',reservation_id='r-90hgdj1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1039072813',owner_user_name='tempest-TestNetworkBasicOps-1039072813-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:16:51Z,user_data=None,user_id='abdbefadac2a4d98bd33ed8a1a60ff75',uuid=8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.679 351492 DEBUG nova.network.os_vif_util [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converting VIF {"id": "ae5db7e6-7a7a-4116-954a-be851ee02864", "address": "fa:16:3e:ed:5c:3e", "network": {"id": "ed008f09-da46-4507-9be2-7398a4728121", "bridge": "br-int", "label": "tempest-network-smoke--628634883", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8f8e5d142604e8c8aabf1e14a1467ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae5db7e6-7a", "ovs_interfaceid": "ae5db7e6-7a7a-4116-954a-be851ee02864", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.680 351492 DEBUG nova.network.os_vif_util [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.681 351492 DEBUG os_vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.683 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.684 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae5db7e6-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.685 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.686 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:5c:3e 10.100.0.3'], port_security=['fa:16:3e:ed:5c:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed008f09-da46-4507-9be2-7398a4728121', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8f8e5d142604e8c8aabf1e14a1467ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '727984b7-e6f0-4093-a68a-8a566271e9dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15a0724e-2d9f-4375-b3ec-7cde297fca09, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=ae5db7e6-7a7a-4116-954a-be851ee02864) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.697 351492 INFO os_vif [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:5c:3e,bridge_name='br-int',has_traffic_filtering=True,id=ae5db7e6-7a7a-4116-954a-be851ee02864,network=Network(ed008f09-da46-4507-9be2-7398a4728121),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae5db7e6-7a')#033[00m
Dec  3 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : haproxy version is 2.8.14-c23fe91
Dec  3 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [NOTICE]   (447659) : path to executable is /usr/sbin/haproxy
Dec  3 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [WARNING]  (447659) : Exiting Master process...
Dec  3 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [ALERT]    (447659) : Current worker (447661) exited with code 143 (Terminated)
Dec  3 02:18:56 compute-0 neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121[447655]: [WARNING]  (447659) : All workers exited. Exiting... (0)
Dec  3 02:18:56 compute-0 systemd[1]: libpod-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope: Deactivated successfully.
Dec  3 02:18:56 compute-0 podman[452116]: 2025-12-03 02:18:56.786212205 +0000 UTC m=+0.070715922 container died abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57-userdata-shm.mount: Deactivated successfully.
Dec  3 02:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6310634a2e9b69b7fec86a833550521f2d887dce434572f35b449a118a1fc6ac-merged.mount: Deactivated successfully.
Dec  3 02:18:56 compute-0 podman[452116]: 2025-12-03 02:18:56.84258115 +0000 UTC m=+0.127084857 container cleanup abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:18:56 compute-0 systemd[1]: libpod-conmon-abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57.scope: Deactivated successfully.
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.919 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.919 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG oslo_concurrency.lockutils [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.920 351492 DEBUG nova.compute.manager [req-550471a7-14f3-4fd8-9b1f-e145a29c780f req-b7b7d5c2-88ea-4384-a626-2769296d1805 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:18:56 compute-0 podman[452161]: 2025-12-03 02:18:56.958773167 +0000 UTC m=+0.069766435 container remove abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.968 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd9360d-ab90-4753-8593-6569c66ba2a8]: (4, ('Wed Dec  3 02:18:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 (abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57)\nabc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57\nWed Dec  3 02:18:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 (abc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57)\nabc133411443d1571c13e1b8a96c81b8811797a052a8fda9f3f684f98f6fbf57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.970 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[f83f16fb-7c33-4ba4-94d8-facca821f446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.972 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped008f09-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:18:56 compute-0 kernel: taped008f09-d0: left promiscuous mode
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.979 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 nova_compute[351485]: 2025-12-03 02:18:56.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:56 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:56.995 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e253b9f8-b5d2-4bc1-865e-4df78aba807a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.011 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3f645acd-e73e-4af8-9e54-c2e71c65dcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.012 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e85b816c-df94-41e7-b994-9a47d978bdfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.034 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4a584-d369-487a-9fa0-a280b8b8c9b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704204, 'reachable_time': 32145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452179, 'error': None, 'target': 'ovnmeta-ed008f09-da46-4507-9be2-7398a4728121', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.038 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed008f09-da46-4507-9be2-7398a4728121 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.038 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[135ceaa4-52d7-4673-9488-201e60bcb061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 systemd[1]: run-netns-ovnmeta\x2ded008f09\x2dda46\x2d4507\x2d9be2\x2d7398a4728121.mount: Deactivated successfully.
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.040 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.043 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.044 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef041e4-60b5-4855-b632-cf5922d7441a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.045 288528 INFO neutron.agent.ovn.metadata.agent [-] Port ae5db7e6-7a7a-4116-954a-be851ee02864 in datapath ed008f09-da46-4507-9be2-7398a4728121 unbound from our chassis#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.047 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed008f09-da46-4507-9be2-7398a4728121, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:18:57 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:57.048 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[40b828cc-1ffd-42a7-8c8c-9f5cd7cbe296]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:18:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 5.0 MiB/s wr, 150 op/s
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.285 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Successfully updated port: 0d927baf-41d2-458f-b4c0-1218ba0eec13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:18:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.299 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.486 351492 DEBUG nova.compute.manager [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.486 351492 DEBUG nova.compute.manager [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing instance network info cache due to event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.487 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.527 351492 INFO nova.virt.libvirt.driver [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deleting instance files /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_del#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.528 351492 INFO nova.virt.libvirt.driver [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deletion of /var/lib/nova/instances/8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592_del complete#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.535 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.635 351492 INFO nova.compute.manager [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.636 351492 DEBUG oslo.service.loopingcall [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.637 351492 DEBUG nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:18:57 compute-0 nova_compute[351485]: 2025-12-03 02:18:57.637 351492 DEBUG nova.network.neutron [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:18:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.346114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338346187, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 251, "total_data_size": 862929, "memory_usage": 876664, "flush_reason": "Manual Compaction"}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338357293, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 855531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39387, "largest_seqno": 40113, "table_properties": {"data_size": 851729, "index_size": 1582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 7651, "raw_average_key_size": 17, "raw_value_size": 844093, "raw_average_value_size": 1892, "num_data_blocks": 70, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728284, "oldest_key_time": 1764728284, "file_creation_time": 1764728338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 11256 microseconds, and 6042 cpu microseconds.
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.357380) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 855531 bytes OK
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.357405) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360170) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360193) EVENT_LOG_v1 {"time_micros": 1764728338360186, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.360215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 859173, prev total WAL file size 859173, number of live WAL files 2.
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.361350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(835KB)], [92(6911KB)]
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338361438, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 7932690, "oldest_snapshot_seqno": -1}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5531 keys, 7196774 bytes, temperature: kUnknown
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338418356, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7196774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7162279, "index_size": 19537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 143997, "raw_average_key_size": 26, "raw_value_size": 7064601, "raw_average_value_size": 1277, "num_data_blocks": 776, "num_entries": 5531, "num_filter_entries": 5531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.419229) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7196774 bytes
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.422701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.8 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.7 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(17.7) write-amplify(8.4) OK, records in: 6049, records dropped: 518 output_compression: NoCompression
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.422739) EVENT_LOG_v1 {"time_micros": 1764728338422721, "job": 54, "event": "compaction_finished", "compaction_time_micros": 57563, "compaction_time_cpu_micros": 35465, "output_level": 6, "num_output_files": 1, "total_output_size": 7196774, "num_input_records": 6049, "num_output_records": 5531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338424167, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728338425895, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.361186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:18:58.426106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:18:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.690 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.939 351492 DEBUG nova.network.neutron [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:18:58 compute-0 nova_compute[351485]: 2025-12-03 02:18:58.959 351492 INFO nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Took 1.32 seconds to deallocate network for instance.#033[00m
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.016 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.017 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:18:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 199 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 4.9 MiB/s wr, 131 op/s
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.112 351492 DEBUG oslo_concurrency.processutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.381 351492 DEBUG nova.network.neutron [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.407 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.408 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance network_info: |[{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.410 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.411 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.413 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start _get_guest_xml network_info=[{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.418 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.419 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.420 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.421 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.422 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.423 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.424 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.425 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-unplugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.426 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG oslo_concurrency.lockutils [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.427 351492 DEBUG nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] No waiting events found dispatching network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.428 351492 WARNING nova.compute.manager [req-a527b552-5fe0-4306-a18a-15efccdbec89 req-e19e9d17-4bf6-42e7-b5ad-3fd5b18ab9ae 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received unexpected event network-vif-plugged-ae5db7e6-7a7a-4116-954a-be851ee02864 for instance with vm_state deleted and task_state None.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.438 351492 WARNING nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.444 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.445 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.457 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.457 351492 DEBUG nova.virt.libvirt.host [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.458 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.458 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:14:44Z,direct_url=<?>,disk_format='qcow2',id=ef773cba-72f0-486f-b5e5-792ff26bb688,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9746b242761a48048d185ce26d622b33',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:14:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.459 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.460 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.461 351492 DEBUG nova.virt.hardware [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.465 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:18:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:18:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083283326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.626 351492 DEBUG oslo_concurrency.processutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.640 351492 DEBUG nova.compute.provider_tree [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.650 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:18:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:18:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.683 351492 DEBUG nova.scheduler.client.report [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.702 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.731 351492 INFO nova.scheduler.client.report [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Deleted allocations for instance 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592
Dec  3 02:18:59 compute-0 podman[158098]: time="2025-12-03T02:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:18:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8194 "" "Go-http-client/1.1"
Dec  3 02:18:59 compute-0 nova_compute[351485]: 2025-12-03 02:18:59.816 351492 DEBUG oslo_concurrency.lockutils [None req-1a1bc7b1-2a09-4d63-88b6-eeeecfcf86be abdbefadac2a4d98bd33ed8a1a60ff75 f8f8e5d142604e8c8aabf1e14a1467ca - - default default] Lock "8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:18:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:18:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244570002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.024 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.077 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.088 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:19:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931761169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.620 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.623 351492 DEBUG nova.virt.libvirt.vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:18:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.624 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.625 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.628 351492 DEBUG nova.objects.instance [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.661 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <uuid>48201127-9aa0-4cde-a41d-6790411480a4</uuid>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <name>instance-0000000d</name>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:name>tempest-TestServerBasicOps-server-1226962462</nova:name>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:18:59</nova:creationTime>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:user uuid="2de48f7608ea45c8ac558125d72373c4">tempest-TestServerBasicOps-1222487710-project-member</nova:user>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:project uuid="38f1a4b24bc74f43a70b0fc06f48b9a2">tempest-TestServerBasicOps-1222487710</nova:project>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="ef773cba-72f0-486f-b5e5-792ff26bb688"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <nova:port uuid="0d927baf-41d2-458f-b4c0-1218ba0eec13">
Dec  3 02:19:00 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <system>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="serial">48201127-9aa0-4cde-a41d-6790411480a4</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="uuid">48201127-9aa0-4cde-a41d-6790411480a4</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </system>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <os>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </os>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <features>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </features>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/48201127-9aa0-4cde-a41d-6790411480a4_disk">
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </source>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/48201127-9aa0-4cde-a41d-6790411480a4_disk.config">
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </source>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:19:00 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:55:61:16"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <target dev="tap0d927baf-41"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/console.log" append="off"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <video>
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </video>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:19:00 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:19:00 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:19:00 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:19:00 compute-0 nova_compute[351485]: </domain>
Dec  3 02:19:00 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.662 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Preparing to wait for external event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.663 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.663 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.664 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.665 351492 DEBUG nova.virt.libvirt.vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:18:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.665 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.666 351492 DEBUG nova.network.os_vif_util [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.667 351492 DEBUG os_vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.668 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.675 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.675 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d927baf-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.676 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d927baf-41, col_values=(('external_ids', {'iface-id': '0d927baf-41d2-458f-b4c0-1218ba0eec13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:61:16', 'vm-uuid': '48201127-9aa0-4cde-a41d-6790411480a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:00 compute-0 NetworkManager[48912]: <info>  [1764728340.6820] manager: (tap0d927baf-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.683 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.693 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.694 351492 INFO os_vif [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41')
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.782 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] No VIF found with MAC fa:16:3e:55:61:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.783 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Using config drive
Dec  3 02:19:00 compute-0 nova_compute[351485]: 2025-12-03 02:19:00.819 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:19:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 150 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 5.2 MiB/s wr, 163 op/s
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.305 351492 DEBUG nova.compute.manager [req-b4673748-32ec-4525-90e4-65789f68cb0f req-77a7407d-9013-46af-bb9d-5fb4c6477ed6 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Received event network-vif-deleted-ae5db7e6-7a7a-4116-954a-be851ee02864 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:19:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.509 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Creating config drive at /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.516 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcazr1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.665 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zcazr1d" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.724 351492 DEBUG nova.storage.rbd_utils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] rbd image 48201127-9aa0-4cde-a41d-6790411480a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 02:19:01 compute-0 nova_compute[351485]: 2025-12-03 02:19:01.737 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config 48201127-9aa0-4cde-a41d-6790411480a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.065 351492 DEBUG oslo_concurrency.processutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config 48201127-9aa0-4cde-a41d-6790411480a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.066 351492 INFO nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deleting local config drive /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4/disk.config because it was imported into RBD.
Dec  3 02:19:02 compute-0 kernel: tap0d927baf-41: entered promiscuous mode
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.1810] manager: (tap0d927baf-41): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Dec  3 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00170|binding|INFO|Claiming lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 for this chassis.
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00171|binding|INFO|0d927baf-41d2-458f-b4c0-1218ba0eec13: Claiming fa:16:3e:55:61:16 10.100.0.9
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.195 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:61:16 10.100.0.9'], port_security=['fa:16:3e:55:61:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '48201127-9aa0-4cde-a41d-6790411480a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b46a3397-654d-4ceb-be75-a322ea7e5091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ad947c5-c226-4f50-af5d-711cff08343d b2c98479-d787-4d5e-b71b-1dd64682dc39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2444ad0-b9d4-4c2c-9115-6ef22db7fd9a, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=0d927baf-41d2-458f-b4c0-1218ba0eec13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.197 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 0d927baf-41d2-458f-b4c0-1218ba0eec13 in datapath b46a3397-654d-4ceb-be75-a322ea7e5091 bound to our chassis
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.202 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b46a3397-654d-4ceb-be75-a322ea7e5091
Dec  3 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00172|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 ovn-installed in OVS
Dec  3 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00173|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 up in Southbound
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.221 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d2609792-7a37-4e92-9ebf-0a2e6806c61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.222 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb46a3397-61 in ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.228 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb46a3397-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.229 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a26db8b7-fdbb-4e1e-a533-7cc077b73f88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.230 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.230 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[406f72ff-a3e9-4993-9ea7-3b76e137630b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 systemd-udevd[452341]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.254 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[db02e0be-6683-437d-ae59-2b1ab9a402f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 systemd-machined[138558]: New machine qemu-14-instance-0000000d.
Dec  3 02:19:02 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.2802] device (tap0d927baf-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.2860] device (tap0d927baf-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.285 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a14e12-08a7-4220-9ca1-b233ba052055]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.330 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[201ba08e-9948-4daf-8f98-e0b07823eb82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.3399] manager: (tapb46a3397-60): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Dec  3 02:19:02 compute-0 systemd-udevd[452344]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.339 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[a83c9d42-1fc8-4fae-bbe8-ec61f37f585f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.387 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[a84bb635-2c61-4331-9546-95ab08866aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.392 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[58607d3c-f5f1-477d-8478-61208b210359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.4276] device (tapb46a3397-60): carrier: link connected
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.438 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[a43fdf3e-8cf1-4ce4-b443-119d2469ea39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.468 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[944c0c59-b480-4861-9657-7d9b3be3e83e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb46a3397-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:fe:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718190, 'reachable_time': 22237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452372, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.498 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[75678aea-4b70-40e2-9d2b-a27b1a8f3b37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:fe57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718190, 'tstamp': 718190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452373, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.531 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[5554972d-600b-4cf7-bf18-cb52dfcb858b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb46a3397-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:fe:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718190, 'reachable_time': 22237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452374, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.588 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[49f763b2-16e7-4da1-b2e9-74f3e9063e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.640 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updated VIF entry in instance network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.641 351492 DEBUG nova.network.neutron [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.661 351492 DEBUG oslo_concurrency.lockutils [req-388997a5-97cc-4676-9128-8f9a68cdc340 req-f6e6f823-cde6-42d7-afa3-d049abe74a7e 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.703 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ae20217e-dd25-4204-9a6e-93f481bb0dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.705 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb46a3397-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.705 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.706 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb46a3397-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:02 compute-0 kernel: tapb46a3397-60: entered promiscuous mode
Dec  3 02:19:02 compute-0 NetworkManager[48912]: <info>  [1764728342.7113] manager: (tapb46a3397-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.717 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb46a3397-60, col_values=(('external_ids', {'iface-id': 'b45ed026-f02f-47d3-980a-9a8302853040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.719 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:19:02 compute-0 ovn_controller[89134]: 2025-12-03T02:19:02Z|00174|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.730 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.731 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.737 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d9046675-17d6-4161-a7a2-6844456e70ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.738 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/b46a3397-654d-4ceb-be75-a322ea7e5091.pid.haproxy
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID b46a3397-654d-4ceb-be75-a322ea7e5091
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:19:02 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:02.739 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'env', 'PROCESS_TAG=haproxy-b46a3397-654d-4ceb-be75-a322ea7e5091', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b46a3397-654d-4ceb-be75-a322ea7e5091.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.765 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.992 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:02 compute-0 nova_compute[351485]: 2025-12-03 02:19:02.994 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.017 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:19:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 4.3 MiB/s wr, 146 op/s
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.122 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.123 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.139 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.140 351492 INFO nova.compute.claims [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.175 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.1748834, 48201127-9aa0-4cde-a41d-6790411480a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.176 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Started (Lifecycle Event)#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.204 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.213 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.1750774, 48201127-9aa0-4cde-a41d-6790411480a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.213 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.227 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.233 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.252 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.266 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.313168101 +0000 UTC m=+0.086348764 container create 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  3 02:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.271296966 +0000 UTC m=+0.044477639 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:19:03 compute-0 systemd[1]: Started libpod-conmon-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope.
Dec  3 02:19:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:19:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3ca008127e0843a16153cba25a8cdfe9386b435396ea086db82b591e22278b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.485084065 +0000 UTC m=+0.258264798 container init 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  3 02:19:03 compute-0 podman[452447]: 2025-12-03 02:19:03.506673096 +0000 UTC m=+0.279853799 container start 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.548 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.550 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.551 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:03 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : New worker (452488) forked
Dec  3 02:19:03 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : Loading success.
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.553 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.554 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Processing event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.555 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.556 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.557 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.558 351492 DEBUG oslo_concurrency.lockutils [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.559 351492 DEBUG nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.560 351492 WARNING nova.compute.manager [req-de94373f-3cc9-4602-975e-f7ddb4aa3d1b req-b9ab7324-2fe9-4bf0-abb1-2aa261358061 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received unexpected event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with vm_state building and task_state spawning.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.562 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.573 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728343.5719786, 48201127-9aa0-4cde-a41d-6790411480a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.574 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.578 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.587 351492 INFO nova.virt.libvirt.driver [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance spawned successfully.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.588 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.597 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.607 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.625 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.626 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.626 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.627 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.628 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.628 351492 DEBUG nova.virt.libvirt.driver [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.637 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.693 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.703 351492 INFO nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 9.93 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.703 351492 DEBUG nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.775 351492 INFO nova.compute.manager [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 11.11 seconds to build instance.#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.796 351492 DEBUG oslo_concurrency.lockutils [None req-5000d13b-4018-4675-a259-d339e7106a47 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:19:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259948269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.845 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.854 351492 DEBUG nova.compute.provider_tree [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.871 351492 DEBUG nova.scheduler.client.report [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.897 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.897 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.954 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.955 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:19:03 compute-0 nova_compute[351485]: 2025-12-03 02:19:03.979 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.001 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.093 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.095 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.095 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating image(s)#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.137 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.177 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.232 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.241 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.242 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:04 compute-0 nova_compute[351485]: 2025-12-03 02:19:04.270 351492 DEBUG nova.policy [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63f39ac2863946b8b817457e689ff933', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:19:05 compute-0 ovn_controller[89134]: 2025-12-03T02:19:05Z|00175|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.107 351492 DEBUG nova.virt.libvirt.imagebackend [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image locations are: [{'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/8876482c-db67-48c0-9203-60685152fc9d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://3765feb2-36f8-5b86-b74c-64e9221f9c4c/images/8876482c-db67-48c0-9203-60685152fc9d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 02:19:05 compute-0 ovn_controller[89134]: 2025-12-03T02:19:05Z|00176|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.748 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728330.7424986, 1b83725c-0af2-491f-98d9-bdb0ed1a5979 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.748 351492 INFO nova.compute.manager [-] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.771 351492 DEBUG nova.compute.manager [None req-8f9b20d3-6de9-4f77-8230-439e09794c86 - - - - - -] [instance: 1b83725c-0af2-491f-98d9-bdb0ed1a5979] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:05 compute-0 nova_compute[351485]: 2025-12-03 02:19:05.867 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Successfully created port: f36a9f58-d7c9-4f05-942d-5a2c4cce705a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.728 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.828 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.829 351492 DEBUG nova.virt.images [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] 8876482c-db67-48c0-9203-60685152fc9d was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  3 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.830 351492 DEBUG nova.privsep.utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 02:19:06 compute-0 nova_compute[351485]: 2025-12-03 02:19:06.831 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.084 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.part /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.093 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 915 KiB/s wr, 132 op/s
Dec  3 02:19:07 compute-0 NetworkManager[48912]: <info>  [1764728347.1573] manager: (patch-provnet-80f94762-882c-4d34-b4ad-5139365af23d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.156 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:07 compute-0 NetworkManager[48912]: <info>  [1764728347.1590] manager: (patch-br-int-to-provnet-80f94762-882c-4d34-b4ad-5139365af23d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.186 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.187 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.241 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.253 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.398 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:07 compute-0 ovn_controller[89134]: 2025-12-03T02:19:07Z|00177|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.438 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.539 351492 DEBUG nova.compute.manager [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.539 351492 DEBUG nova.compute.manager [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing instance network info cache due to event network-changed-0d927baf-41d2-458f-b4c0-1218ba0eec13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.540 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.542 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.544 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Refreshing network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.674 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Successfully updated port: f36a9f58-d7c9-4f05-942d-5a2c4cce705a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.692 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.709 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.830 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] resizing rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.892 351492 DEBUG nova.compute.manager [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-changed-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.893 351492 DEBUG nova.compute.manager [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Refreshing instance network info cache due to event network-changed-f36a9f58-d7c9-4f05-942d-5a2c4cce705a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:19:07 compute-0 nova_compute[351485]: 2025-12-03 02:19:07.895 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.050 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.070 351492 DEBUG nova.objects.instance [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'migration_context' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.086 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.087 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Ensure instance console log exists: /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.088 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.089 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  3 02:19:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  3 02:19:08 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  3 02:19:08 compute-0 nova_compute[351485]: 2025-12-03 02:19:08.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 124 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 837 KiB/s rd, 288 KiB/s wr, 106 op/s
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.405 351492 DEBUG nova.network.neutron [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance network_info: |[{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.430 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.431 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Refreshing network info cache for port f36a9f58-d7c9-4f05-942d-5a2c4cce705a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.434 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start _get_guest_xml network_info=[{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8876482c-db67-48c0-9203-60685152fc9d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.444 351492 WARNING nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.468 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.471 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.482 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.484 351492 DEBUG nova.virt.libvirt.host [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.485 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.486 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.489 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.490 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.491 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.492 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.493 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.496 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.498 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.499 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.500 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.501 351492 DEBUG nova.virt.hardware [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.519 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.594 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updated VIF entry in instance network info cache for port 0d927baf-41d2-458f-b4c0-1218ba0eec13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.596 351492 DEBUG nova.network.neutron [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [{"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.634 351492 DEBUG oslo_concurrency.lockutils [req-f46862cf-47eb-4a29-bf2d-786f066c91ff req-197b6ad4-9494-47fd-a9f3-65b8595a0d03 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-48201127-9aa0-4cde-a41d-6790411480a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:19:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:19:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008078241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:19:09 compute-0 nova_compute[351485]: 2025-12-03 02:19:09.984 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.036 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.045 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:19:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857908507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.541 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.546 351492 DEBUG nova.virt.libvirt.vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:19:04Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.548 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.549 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.551 351492 DEBUG nova.objects.instance [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.594 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <uuid>2890ee5c-21c1-4e9d-9421-1a2df0f67f76</uuid>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <name>instance-0000000e</name>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:name>te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr</nova:name>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:19:09</nova:creationTime>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:user uuid="8f61f44789494541b7c101b0fdab52f0">tempest-PrometheusGabbiTest-1008659157-project-member</nova:user>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:project uuid="63f39ac2863946b8b817457e689ff933">tempest-PrometheusGabbiTest-1008659157</nova:project>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="8876482c-db67-48c0-9203-60685152fc9d"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <nova:port uuid="f36a9f58-d7c9-4f05-942d-5a2c4cce705a">
Dec  3 02:19:10 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.0.239" ipVersion="4"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <system>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="serial">2890ee5c-21c1-4e9d-9421-1a2df0f67f76</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="uuid">2890ee5c-21c1-4e9d-9421-1a2df0f67f76</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </system>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <os>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </os>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <features>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </features>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk">
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </source>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config">
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </source>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:19:10 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:dd:ed:eb"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <target dev="tapf36a9f58-d7"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/console.log" append="off"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <video>
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </video>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:19:10 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:19:10 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:19:10 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:19:10 compute-0 nova_compute[351485]: </domain>
Dec  3 02:19:10 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.597 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Preparing to wait for external event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.598 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.599 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.599 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.601 351492 DEBUG nova.virt.libvirt.vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:19:04Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.602 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.603 351492 DEBUG nova.network.os_vif_util [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.604 351492 DEBUG os_vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.605 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.607 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.608 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.615 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf36a9f58-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.616 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf36a9f58-d7, col_values=(('external_ids', {'iface-id': 'f36a9f58-d7c9-4f05-942d-5a2c4cce705a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:ed:eb', 'vm-uuid': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:10 compute-0 NetworkManager[48912]: <info>  [1764728350.6203] manager: (tapf36a9f58-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.619 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.633 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.635 351492 INFO os_vif [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7')#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.715 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.716 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.716 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No VIF found with MAC fa:16:3e:dd:ed:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.717 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Using config drive#033[00m
Dec  3 02:19:10 compute-0 nova_compute[351485]: 2025-12-03 02:19:10.781 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 151 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.650 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728336.6488826, 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.651 351492 INFO nova.compute.manager [-] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.681 351492 DEBUG nova.compute.manager [None req-79d477dc-078f-48b1-b44e-3204d13626d6 - - - - - -] [instance: 8a9f8b77-8d9c-4c1f-9bf6-4d5e6226f592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.722 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Creating config drive at /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.736 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgezeca7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.896 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgezeca7b" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.962 351492 DEBUG nova.storage.rbd_utils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:19:11 compute-0 nova_compute[351485]: 2025-12-03 02:19:11.974 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.294 351492 DEBUG oslo_concurrency.processutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config 2890ee5c-21c1-4e9d-9421-1a2df0f67f76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.296 351492 INFO nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deleting local config drive /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.config because it was imported into RBD.#033[00m
Dec  3 02:19:12 compute-0 kernel: tapf36a9f58-d7: entered promiscuous mode
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.393 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.401 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00178|binding|INFO|Claiming lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a for this chassis.
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00179|binding|INFO|f36a9f58-d7c9-4f05-942d-5a2c4cce705a: Claiming fa:16:3e:dd:ed:eb 10.100.0.239
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.4081] manager: (tapf36a9f58-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.425 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ed:eb 10.100.0.239'], port_security=['fa:16:3e:dd:ed:eb 10.100.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.239/16', 'neutron:device_id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=f36a9f58-d7c9-4f05-942d-5a2c4cce705a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.428 288528 INFO neutron.agent.ovn.metadata.agent [-] Port f36a9f58-d7c9-4f05-942d-5a2c4cce705a in datapath a7615b73-b987-4b91-b12c-2d7488085657 bound to our chassis#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.431 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.452 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[c85551f3-6fdc-4b09-9adf-23e969867029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.453 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7615b73-b1 in ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.455 414755 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7615b73-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.455 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[44797c80-fbe7-4fc4-8e4c-f256b333f0fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.457 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[92303282-6b59-407f-9f34-18794c762635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00180|binding|INFO|Releasing lport b45ed026-f02f-47d3-980a-9a8302853040 from this chassis (sb_readonly=0)
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.473 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[4576038f-2623-4717-abcf-82a21c936621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00181|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a ovn-installed in OVS
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00182|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a up in Southbound
Dec  3 02:19:12 compute-0 systemd-machined[138558]: New machine qemu-15-instance-0000000e.
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.481 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.491 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated VIF entry in instance network info cache for port f36a9f58-d7c9-4f05-942d-5a2c4cce705a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.492 351492 DEBUG nova.network.neutron [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:19:12 compute-0 systemd-udevd[452838]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.503 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[dadb1425-7e09-4317-8864-76adbfc43502]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.509 351492 DEBUG oslo_concurrency.lockutils [req-704d7f60-60c2-454f-943c-f9cd435b00f8 req-0ed0d989-db79-4d32-8ce0-55269b0d0721 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5203] device (tapf36a9f58-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5217] device (tapf36a9f58-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.546 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[553a6255-f8ef-4ff2-927c-e239d3d13727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.5634] manager: (tapa7615b73-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.563 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[792a15a1-d268-4048-b8a4-dbdf08b55ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:12 compute-0 podman[452809]: 2025-12-03 02:19:12.58544921 +0000 UTC m=+0.142081581 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:19:12 compute-0 podman[452811]: 2025-12-03 02:19:12.586281224 +0000 UTC m=+0.137216024 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.603 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[410cdfa3-0deb-4a5a-8c3b-bd85707eab68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.606 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[23c14c9b-534a-4a9f-ba10-22e4b89dcc4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 podman[452810]: 2025-12-03 02:19:12.612567657 +0000 UTC m=+0.172452880 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.6265] device (tapa7615b73-b0): carrier: link connected
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.632 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b6fc1-044b-4c21-ad87-ecca54a3abbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.649 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[49da2eb2-1910-429d-af65-4da3f04bb7c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 34894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452899, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.671 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1d03151c-b192-48a6-aacb-779141a3d0b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:3ef5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719210, 'tstamp': 719210}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452900, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.694 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbbfcff-7d69-460c-b334-e25624300383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 34894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 452901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.713 351492 DEBUG nova.compute.manager [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.714 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.715 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.715 351492 DEBUG oslo_concurrency.lockutils [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.716 351492 DEBUG nova.compute.manager [req-683da914-89fa-40fa-ae44-e4b528b4be95 req-ee6cbe03-c94e-4b1f-973b-e4aa21d34bda 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Processing event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.743 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[84ba0024-eb72-4c4d-8aea-1bb849048796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.877 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2f54d88f-d309-4735-87b9-1e2ff1aefbcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.879 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.881 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.882 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:12 compute-0 kernel: tapa7615b73-b0: entered promiscuous mode
Dec  3 02:19:12 compute-0 NetworkManager[48912]: <info>  [1764728352.8866] manager: (tapa7615b73-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.885 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.892 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:19:12 compute-0 ovn_controller[89134]: 2025-12-03T02:19:12Z|00183|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec  3 02:19:12 compute-0 nova_compute[351485]: 2025-12-03 02:19:12.923 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.924 288528 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.926 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3db6b7-9e3d-4043-8d3e-363a24d92e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.927 288528 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: global
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    log         /dev/log local0 debug
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    log-tag     haproxy-metadata-proxy-a7615b73-b987-4b91-b12c-2d7488085657
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    user        root
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    group       root
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    maxconn     1024
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    pidfile     /var/lib/neutron/external/pids/a7615b73-b987-4b91-b12c-2d7488085657.pid.haproxy
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    daemon
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: defaults
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    log global
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    mode http
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    option httplog
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    option dontlognull
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    option http-server-close
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    option forwardfor
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    retries                 3
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    timeout http-request    30s
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    timeout connect         30s
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    timeout client          32s
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    timeout server          32s
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    timeout http-keep-alive 30s
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: listen listener
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    bind 169.254.169.254:80
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    server metadata /var/lib/neutron/metadata_proxy
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]:    http-request add-header X-OVN-Network-ID a7615b73-b987-4b91-b12c-2d7488085657
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  3 02:19:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:12.928 288528 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'env', 'PROCESS_TAG=haproxy-a7615b73-b987-4b91-b12c-2d7488085657', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7615b73-b987-4b91-b12c-2d7488085657.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  3 02:19:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.208 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.207108, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.208 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Started (Lifecycle Event)#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.212 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.219 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.227 351492 INFO nova.virt.libvirt.driver [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance spawned successfully.#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.228 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.243 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.258 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.272 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.273 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.275 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.280 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.287 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.287 351492 DEBUG nova.virt.libvirt.driver [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.292 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.293 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.207289, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.293 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:19:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.344 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.353 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728353.21828, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.354 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.380 351492 INFO nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 9.29 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.381 351492 DEBUG nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.385 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.407 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.455 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.483 351492 INFO nova.compute.manager [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 10.40 seconds to build instance.#033[00m
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.510 351492 DEBUG oslo_concurrency.lockutils [None req-b8f24b29-97fe-4847-9b14-6df33f452bbf 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.517475409 +0000 UTC m=+0.125054899 container create c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.456828613 +0000 UTC m=+0.064408183 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 02:19:13 compute-0 systemd[1]: Started libpod-conmon-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope.
Dec  3 02:19:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88013123d4a753ad03452e7c5ee2f44c7a3cff6bfcbc4c86988a478219f1d093/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.664924151 +0000 UTC m=+0.272503671 container init c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:19:13 compute-0 podman[452973]: 2025-12-03 02:19:13.673831153 +0000 UTC m=+0.281410643 container start c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:19:13 compute-0 nova_compute[351485]: 2025-12-03 02:19:13.698 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:13 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : New worker (452995) forked
Dec  3 02:19:13 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : Loading success.
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.989 351492 DEBUG nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.989 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.990 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.990 351492 DEBUG oslo_concurrency.lockutils [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.991 351492 DEBUG nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:19:14 compute-0 nova_compute[351485]: 2025-12-03 02:19:14.991 351492 WARNING nova.compute.manager [req-5e7b1aa2-80bc-49dc-9ddb-adfa81ba5e4a req-e2e7e060-1a5c-4cb2-b238-72f3e8e723a7 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received unexpected event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with vm_state active and task_state None.#033[00m
Dec  3 02:19:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec  3 02:19:15 compute-0 nova_compute[351485]: 2025-12-03 02:19:15.621 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec  3 02:19:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.615 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.616 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.618 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:18 compute-0 nova_compute[351485]: 2025-12-03 02:19:18.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:18 compute-0 podman[453005]: 2025-12-03 02:19:18.8901526 +0000 UTC m=+0.139066835 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  3 02:19:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:19:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131944634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.104 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.221 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.222 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.228 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.229 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.535 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 48201127-9aa0-4cde-a41d-6790411480a4 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:19:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:19.544 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.759 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.761 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.94643783569336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.761 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.762 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.889 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 48201127-9aa0-4cde-a41d-6790411480a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.890 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.891 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.892 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:19:19 compute-0 nova_compute[351485]: 2025-12-03 02:19:19.949 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:19:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:19:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307495154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.456 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.472 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.493 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.530 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.531 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:20 compute-0 nova_compute[351485]: 2025-12-03 02:19:20.625 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2084 Content-Type: application/json Date: Wed, 03 Dec 2025 02:19:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-89d201f8-3452-4af1-80a2-7836e7d8b368 x-openstack-request-id: req-89d201f8-3452-4af1-80a2-7836e7d8b368 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "48201127-9aa0-4cde-a41d-6790411480a4", "name": "tempest-TestServerBasicOps-server-1226962462", "status": "ACTIVE", "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "user_id": "2de48f7608ea45c8ac558125d72373c4", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "b7a9ecca22a84e47db0dcb720867459e13c9ede783cdac92160bd565", "image": {"id": "ef773cba-72f0-486f-b5e5-792ff26bb688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ef773cba-72f0-486f-b5e5-792ff26bb688"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:18:51Z", "updated": "2025-12-03T02:19:03Z", "addresses": {"tempest-TestServerBasicOps-1788173895-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:55:61:16"}, {"version": 4, "addr": "192.168.122.211", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:55:61:16"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/48201127-9aa0-4cde-a41d-6790411480a4"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-954582748", "OS-SRV-USG:launched_at": "2025-12-03T02:19:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1036119230"}, {"name": "tempest-secgroup-smoke-1084002553"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.923 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/48201127-9aa0-4cde-a41d-6790411480a4 used request id req-89d201f8-3452-4af1-80a2-7836e7d8b368 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.926 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '48201127-9aa0-4cde-a41d-6790411480a4', 'name': 'tempest-TestServerBasicOps-server-1226962462', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ef773cba-72f0-486f-b5e5-792ff26bb688'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'user_id': '2de48f7608ea45c8ac558125d72373c4', 'hostId': 'b7a9ecca22a84e47db0dcb720867459e13c9ede783cdac92160bd565', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.930 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:19:20 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:20.931 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:19:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.063 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 03 Dec 2025 02:19:20 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-95792df7-aa63-4950-bb2f-ba5778b76d04 x-openstack-request-id: req-95792df7-aa63-4950-bb2f-ba5778b76d04 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.064 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2890ee5c-21c1-4e9d-9421-1a2df0f67f76", "name": "te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr", "status": "ACTIVE", "tenant_id": "63f39ac2863946b8b817457e689ff933", "user_id": "8f61f44789494541b7c101b0fdab52f0", "metadata": {"metering.server_group": "38bfb145-4971-41b6-9bc3-faf3c3931019"}, "hostId": "b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2", "image": {"id": "8876482c-db67-48c0-9203-60685152fc9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8876482c-db67-48c0-9203-60685152fc9d"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:19:01Z", "updated": "2025-12-03T02:19:13Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.239", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:dd:ed:eb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:19:13.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.064 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2890ee5c-21c1-4e9d-9421-1a2df0f67f76 used request id req-95792df7-aa63-4950-bb2f-ba5778b76d04 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.066 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.066 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:19:22.067449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.099 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.099 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 48201127-9aa0-4cde-a41d-6790411480a4: ceilometer.compute.pollsters.NoVolumeException
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76: ceilometer.compute.pollsters.NoVolumeException
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:19:22.146248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.150 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 48201127-9aa0-4cde-a41d-6790411480a4 / tap0d927baf-41 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.151 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.155 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 / tapf36a9f58-d7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.155 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.157 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:19:22.156963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.159 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:19:22.159083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:19:22.160999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.162 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.163 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:19:22.162593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:19:22.164415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.182 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.182 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.201 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.201 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>]
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:19:22.202731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:19:22.204078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.259 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.260 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.316 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.316 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:19:22.318697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.319 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:19:22.321724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.latency volume: 2114496694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.322 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.latency volume: 2875731 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.323 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2182451717 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.323 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2630415 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.326 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.326 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.327 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.327 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:19:22.325505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.329 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:19:22.329430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.330 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:19:22.332186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.334 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:19:22.334337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.336 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.337 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.337 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:19:22.336421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.338 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.339 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:19:22.338608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.342 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:19:22.342106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.344 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/cpu volume: 17700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:19:22.344790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.345 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 8580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:19:22.346720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.348 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:19:22.348758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.349 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.350 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:19:22.350737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.351 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:19:22.353078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.353 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.354 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:19:22.354603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.355 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 DEBUG ceilometer.compute.pollsters [-] 48201127-9aa0-4cde-a41d-6790411480a4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:19:22.355951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.357 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1226962462>, <NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr>]
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:19:22.357468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:19:22.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:19:22 compute-0 podman[453072]: 2025-12-03 02:19:22.896920483 +0000 UTC m=+0.117191846 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Dec  3 02:19:22 compute-0 podman[453070]: 2025-12-03 02:19:22.901160183 +0000 UTC m=+0.140179817 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:19:22 compute-0 podman[453071]: 2025-12-03 02:19:22.916516038 +0000 UTC m=+0.152029913 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:19:22 compute-0 podman[453069]: 2025-12-03 02:19:22.925932794 +0000 UTC m=+0.174495338 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:19:22 compute-0 podman[453082]: 2025-12-03 02:19:22.926613793 +0000 UTC m=+0.134818395 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:19:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 755 KiB/s wr, 79 op/s
Dec  3 02:19:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.533 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.533 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.597 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:23 compute-0 nova_compute[351485]: 2025-12-03 02:19:23.706 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:24 compute-0 nova_compute[351485]: 2025-12-03 02:19:24.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  3 02:19:25 compute-0 nova_compute[351485]: 2025-12-03 02:19:25.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:26 compute-0 nova_compute[351485]: 2025-12-03 02:19:26.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 65 op/s
Dec  3 02:19:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:19:28
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Dec  3 02:19:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:19:28 compute-0 nova_compute[351485]: 2025-12-03 02:19:28.708 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:19:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:19:29 compute-0 nova_compute[351485]: 2025-12-03 02:19:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:29 compute-0 podman[158098]: time="2025-12-03T02:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 02:19:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9129 "" "Go-http-client/1.1"
Dec  3 02:19:30 compute-0 nova_compute[351485]: 2025-12-03 02:19:30.636 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 51 op/s
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:19:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:19:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 567 KiB/s rd, 18 op/s
Dec  3 02:19:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:33 compute-0 nova_compute[351485]: 2025-12-03 02:19:33.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:34 compute-0 nova_compute[351485]: 2025-12-03 02:19:34.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:19:34 compute-0 nova_compute[351485]: 2025-12-03 02:19:34.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:19:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:19:35 compute-0 nova_compute[351485]: 2025-12-03 02:19:35.644 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:19:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:38 compute-0 nova_compute[351485]: 2025-12-03 02:19:38.712 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006975264740834798 of space, bias 1.0, pg target 0.20925794222504393 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:19:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:19:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 170 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1 op/s
Dec  3 02:19:40 compute-0 nova_compute[351485]: 2025-12-03 02:19:40.648 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 170 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 409 KiB/s wr, 8 op/s
Dec  3 02:19:41 compute-0 ovn_controller[89134]: 2025-12-03T02:19:41Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:61:16 10.100.0.9
Dec  3 02:19:41 compute-0 ovn_controller[89134]: 2025-12-03T02:19:41Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:61:16 10.100.0.9
Dec  3 02:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:19:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9581 writes, 36K keys, 9581 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9581 writes, 2507 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2123 writes, 7531 keys, 2123 commit groups, 1.0 writes per commit group, ingest: 7.42 MB, 0.01 MB/s#012Interval WAL: 2123 writes, 874 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:19:42 compute-0 podman[453170]: 2025-12-03 02:19:42.863868884 +0000 UTC m=+0.105173937 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:19:42 compute-0 podman[453169]: 2025-12-03 02:19:42.891859946 +0000 UTC m=+0.138214472 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  3 02:19:42 compute-0 podman[453171]: 2025-12-03 02:19:42.900908782 +0000 UTC m=+0.133167649 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:19:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 184 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 1.6 MiB/s wr, 38 op/s
Dec  3 02:19:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:43 compute-0 nova_compute[351485]: 2025-12-03 02:19:43.715 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 200 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec  3 02:19:45 compute-0 nova_compute[351485]: 2025-12-03 02:19:45.654 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:19:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:19:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328678732' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:19:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  3 02:19:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:19:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 2973 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2088 writes, 7681 keys, 2088 commit groups, 1.0 writes per commit group, ingest: 7.32 MB, 0.01 MB/s#012Interval WAL: 2088 writes, 866 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:19:48 compute-0 nova_compute[351485]: 2025-12-03 02:19:48.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 208 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Dec  3 02:19:49 compute-0 podman[453226]: 2025-12-03 02:19:49.913011694 +0000 UTC m=+0.161749118 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:19:50 compute-0 nova_compute[351485]: 2025-12-03 02:19:50.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:50 compute-0 ovn_controller[89134]: 2025-12-03T02:19:50Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:ed:eb 10.100.0.239
Dec  3 02:19:50 compute-0 ovn_controller[89134]: 2025-12-03T02:19:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:ed:eb 10.100.0.239
Dec  3 02:19:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 224 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Dec  3 02:19:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 224 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Dec  3 02:19:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:53 compute-0 nova_compute[351485]: 2025-12-03 02:19:53.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:53 compute-0 podman[453247]: 2025-12-03 02:19:53.879398557 +0000 UTC m=+0.102148071 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:19:53 compute-0 podman[453248]: 2025-12-03 02:19:53.89507159 +0000 UTC m=+0.111811664 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler)
Dec  3 02:19:53 compute-0 podman[453246]: 2025-12-03 02:19:53.898544238 +0000 UTC m=+0.139081836 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec  3 02:19:53 compute-0 podman[453254]: 2025-12-03 02:19:53.902980134 +0000 UTC m=+0.120850810 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:19:53 compute-0 podman[453245]: 2025-12-03 02:19:53.91626948 +0000 UTC m=+0.163536668 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:19:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 235 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec  3 02:19:55 compute-0 nova_compute[351485]: 2025-12-03 02:19:55.664 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:19:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:19:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 02:19:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:19:58 compute-0 nova_compute[351485]: 2025-12-03 02:19:58.725 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 659c4e1d-aeea-49ed-88a9-3509e2ec2b39 does not exist
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6c7ad595-f5d8-4219-a11a-8768aa72e4c9 does not exist
Dec  3 02:19:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f6ad4e01-867b-4c76-9b43-607aada68802 does not exist
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:19:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:19:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:19:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.652 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.653 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:19:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:19:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:19:59 compute-0 podman[158098]: time="2025-12-03T02:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45044 "" "Go-http-client/1.1"
Dec  3 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:19:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:19:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9122 "" "Go-http-client/1.1"
Dec  3 02:19:59 compute-0 podman[453737]: 2025-12-03 02:19:59.930320814 +0000 UTC m=+0.072928324 container create 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:19:59 compute-0 systemd[1]: Started libpod-conmon-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope.
Dec  3 02:19:59 compute-0 podman[453737]: 2025-12-03 02:19:59.900798359 +0000 UTC m=+0.043405859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.052332276 +0000 UTC m=+0.194939786 container init 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.071018965 +0000 UTC m=+0.213626445 container start 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.075885142 +0000 UTC m=+0.218492702 container attach 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:20:00 compute-0 nostalgic_shtern[453753]: 167 167
Dec  3 02:20:00 compute-0 systemd[1]: libpod-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope: Deactivated successfully.
Dec  3 02:20:00 compute-0 conmon[453753]: conmon 58d896b86d57f696eb27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope/container/memory.events
Dec  3 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.090710302 +0000 UTC m=+0.233317832 container died 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-42b4b2e25f2573798ee34d031944ffc35f9eda56ce371d603b07131dafcd6ce9-merged.mount: Deactivated successfully.
Dec  3 02:20:00 compute-0 podman[453737]: 2025-12-03 02:20:00.169497871 +0000 UTC m=+0.312105391 container remove 58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:00 compute-0 systemd[1]: libpod-conmon-58d896b86d57f696eb2748cd45c27ea0a8a6907fe61d8c8e14f851388811d009.scope: Deactivated successfully.
Dec  3 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.440061736 +0000 UTC m=+0.074977332 container create 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.412963779 +0000 UTC m=+0.047879455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:00 compute-0 systemd[1]: Started libpod-conmon-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope.
Dec  3 02:20:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.633853739 +0000 UTC m=+0.268769405 container init 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.650965303 +0000 UTC m=+0.285880929 container start 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:20:00 compute-0 podman[453775]: 2025-12-03 02:20:00.659260208 +0000 UTC m=+0.294175854 container attach 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:20:00 compute-0 nova_compute[351485]: 2025-12-03 02:20:00.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec  3 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:20:01 compute-0 openstack_network_exporter[368278]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:20:01 compute-0 blissful_thompson[453791]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:20:01 compute-0 blissful_thompson[453791]: --> relative data size: 1.0
Dec  3 02:20:01 compute-0 blissful_thompson[453791]: --> All data devices are unavailable
Dec  3 02:20:01 compute-0 systemd[1]: libpod-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Deactivated successfully.
Dec  3 02:20:01 compute-0 systemd[1]: libpod-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Consumed 1.254s CPU time.
Dec  3 02:20:01 compute-0 podman[453775]: 2025-12-03 02:20:01.983897266 +0000 UTC m=+1.618812892 container died 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-712d0207f2fb647226d1ead9ae750754259c8e92cf1fd4492ff3c92fd3132747-merged.mount: Deactivated successfully.
Dec  3 02:20:02 compute-0 podman[453775]: 2025-12-03 02:20:02.114916493 +0000 UTC m=+1.749832089 container remove 2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:20:02 compute-0 systemd[1]: libpod-conmon-2acee72155f45378c92e0bbfb289243ceca12fe29f4e489494dc0d860167deb0.scope: Deactivated successfully.
Dec  3 02:20:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 274 KiB/s wr, 36 op/s
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.253559708 +0000 UTC m=+0.095814321 container create 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.218587439 +0000 UTC m=+0.060842142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:03 compute-0 systemd[1]: Started libpod-conmon-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope.
Dec  3 02:20:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.390838712 +0000 UTC m=+0.233093415 container init 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.404674443 +0000 UTC m=+0.246929046 container start 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.408959264 +0000 UTC m=+0.251213927 container attach 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:20:03 compute-0 thirsty_boyd[453983]: 167 167
Dec  3 02:20:03 compute-0 systemd[1]: libpod-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope: Deactivated successfully.
Dec  3 02:20:03 compute-0 conmon[453983]: conmon 1e790ac29c0f961bda29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope/container/memory.events
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.417126085 +0000 UTC m=+0.259380708 container died 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b29fb5cb5e7b26022e193c1842f4c39aee2662250ba4a057685de95c70a388-merged.mount: Deactivated successfully.
Dec  3 02:20:03 compute-0 podman[453967]: 2025-12-03 02:20:03.470836655 +0000 UTC m=+0.313091278 container remove 1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:20:03 compute-0 systemd[1]: libpod-conmon-1e790ac29c0f961bda29af030e10fce5d48508c65032362682360e4e95ad587d.scope: Deactivated successfully.
Dec  3 02:20:03 compute-0 nova_compute[351485]: 2025-12-03 02:20:03.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.786948109 +0000 UTC m=+0.105607139 container create ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.748841061 +0000 UTC m=+0.067500121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:03 compute-0 systemd[1]: Started libpod-conmon-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope.
Dec  3 02:20:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.964668517 +0000 UTC m=+0.283327647 container init ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.975610017 +0000 UTC m=+0.294269047 container start ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:20:03 compute-0 podman[454006]: 2025-12-03 02:20:03.981080311 +0000 UTC m=+0.299739441 container attach ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  3 02:20:04 compute-0 infallible_curran[454020]: {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    "0": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "devices": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "/dev/loop3"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            ],
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_name": "ceph_lv0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_size": "21470642176",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "name": "ceph_lv0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "tags": {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_name": "ceph",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.crush_device_class": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.encrypted": "0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_id": "0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.vdo": "0"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            },
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "vg_name": "ceph_vg0"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        }
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    ],
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    "1": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "devices": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "/dev/loop4"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            ],
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_name": "ceph_lv1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_size": "21470642176",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "name": "ceph_lv1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "tags": {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_name": "ceph",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.crush_device_class": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.encrypted": "0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_id": "1",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.vdo": "0"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            },
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "vg_name": "ceph_vg1"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        }
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    ],
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    "2": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "devices": [
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "/dev/loop5"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            ],
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_name": "ceph_lv2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_size": "21470642176",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "name": "ceph_lv2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "tags": {
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.cluster_name": "ceph",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.crush_device_class": "",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.encrypted": "0",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osd_id": "2",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:                "ceph.vdo": "0"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            },
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "type": "block",
Dec  3 02:20:04 compute-0 infallible_curran[454020]:            "vg_name": "ceph_vg2"
Dec  3 02:20:04 compute-0 infallible_curran[454020]:        }
Dec  3 02:20:04 compute-0 infallible_curran[454020]:    ]
Dec  3 02:20:04 compute-0 infallible_curran[454020]: }
Dec  3 02:20:04 compute-0 systemd[1]: libpod-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope: Deactivated successfully.
Dec  3 02:20:04 compute-0 podman[454006]: 2025-12-03 02:20:04.808021358 +0000 UTC m=+1.126680388 container died ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7e81615bded6bf61706c0a8c33200c44ef1c5922cb75e0b04e17fb22a33512-merged.mount: Deactivated successfully.
Dec  3 02:20:04 compute-0 podman[454006]: 2025-12-03 02:20:04.902168642 +0000 UTC m=+1.220827702 container remove ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_curran, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:20:04 compute-0 systemd[1]: libpod-conmon-ba11354671995eb180ae8db3a6fd9ac411ae05c9b8bb57c8b3d2ddfa0e98481e.scope: Deactivated successfully.
Dec  3 02:20:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 96 KiB/s wr, 19 op/s
Dec  3 02:20:05 compute-0 ovn_controller[89134]: 2025-12-03T02:20:05Z|00184|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  3 02:20:05 compute-0 nova_compute[351485]: 2025-12-03 02:20:05.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.001814714 +0000 UTC m=+0.083006979 container create 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:05.969127369 +0000 UTC m=+0.050319744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:06 compute-0 systemd[1]: Started libpod-conmon-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope.
Dec  3 02:20:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.127172401 +0000 UTC m=+0.208364786 container init 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.145845569 +0000 UTC m=+0.227037814 container start 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.151106608 +0000 UTC m=+0.232298953 container attach 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:06 compute-0 silly_satoshi[454196]: 167 167
Dec  3 02:20:06 compute-0 systemd[1]: libpod-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope: Deactivated successfully.
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.168007576 +0000 UTC m=+0.249199831 container died 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ffda040e6946eb7611142ba5e9d2444253d455603a09eb18cdbda7c17bc62c-merged.mount: Deactivated successfully.
Dec  3 02:20:06 compute-0 podman[454181]: 2025-12-03 02:20:06.224102743 +0000 UTC m=+0.305294998 container remove 92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:20:06 compute-0 systemd[1]: libpod-conmon-92f399ac6685b4295b7800131b089978f9cef0c09dea2f0a33afd8a338d3b639.scope: Deactivated successfully.
Dec  3 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.46714414 +0000 UTC m=+0.069633841 container create 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.43393274 +0000 UTC m=+0.036422481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:20:06 compute-0 systemd[1]: Started libpod-conmon-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope.
Dec  3 02:20:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.616385882 +0000 UTC m=+0.218875673 container init 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.640779732 +0000 UTC m=+0.243269463 container start 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:20:06 compute-0 podman[454220]: 2025-12-03 02:20:06.654504251 +0000 UTC m=+0.256994052 container attach 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:20:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 73 KiB/s wr, 2 op/s
Dec  3 02:20:07 compute-0 sad_johnson[454237]: {
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_id": 2,
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "type": "bluestore"
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    },
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_id": 1,
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "type": "bluestore"
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    },
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_id": 0,
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:20:07 compute-0 sad_johnson[454237]:        "type": "bluestore"
Dec  3 02:20:07 compute-0 sad_johnson[454237]:    }
Dec  3 02:20:07 compute-0 sad_johnson[454237]: }
Dec  3 02:20:07 compute-0 podman[454220]: 2025-12-03 02:20:07.838877919 +0000 UTC m=+1.441367620 container died 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:20:07 compute-0 systemd[1]: libpod-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Deactivated successfully.
Dec  3 02:20:07 compute-0 systemd[1]: libpod-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Consumed 1.200s CPU time.
Dec  3 02:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd7a45afd4609d4557d4e7dd92011004eabb5af9686d31c62ce8dea877b07ce-merged.mount: Deactivated successfully.
Dec  3 02:20:07 compute-0 podman[454220]: 2025-12-03 02:20:07.938297362 +0000 UTC m=+1.540787053 container remove 2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:20:07 compute-0 systemd[1]: libpod-conmon-2784886ffecb12ed1bd8897c48d8952c413435944ba83faa73b956c612f54ece.scope: Deactivated successfully.
Dec  3 02:20:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:20:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:20:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:20:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:20:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 64b3947e-8e14-4330-ab68-57dbee2f0abc does not exist
Dec  3 02:20:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 648fa0db-cadd-46f9-a277-3c56bc509ad7 does not exist
Dec  3 02:20:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:20:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:20:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:08 compute-0 nova_compute[351485]: 2025-12-03 02:20:08.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Dec  3 02:20:10 compute-0 nova_compute[351485]: 2025-12-03 02:20:10.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.5 KiB/s wr, 0 op/s
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:12.183 288634 DEBUG eventlet.wsgi.server [-] (288634) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:12.184 288634 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: Accept: */*#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: Connection: close#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: Content-Type: text/plain#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: Host: 169.254.169.254#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: User-Agent: curl/7.84.0#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: X-Forwarded-For: 10.100.0.9#015
Dec  3 02:20:12 compute-0 ovn_metadata_agent[288523]: X-Ovn-Network-Id: b46a3397-654d-4ceb-be75-a322ea7e5091 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  3 02:20:12 compute-0 nova_compute[351485]: 2025-12-03 02:20:12.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:20:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:13 compute-0 nova_compute[351485]: 2025-12-03 02:20:13.735 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.736 288634 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  3 02:20:13 compute-0 haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091[452488]: 10.100.0.9:57126 [03/Dec/2025:02:20:12.180] listener listener/metadata 0/0/0/1556/1556 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.737 288634 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.5523539#033[00m
Dec  3 02:20:13 compute-0 podman[454333]: 2025-12-03 02:20:13.871760837 +0000 UTC m=+0.092715734 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:20:13 compute-0 podman[454332]: 2025-12-03 02:20:13.879157417 +0000 UTC m=+0.111548878 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:20:13 compute-0 podman[454331]: 2025-12-03 02:20:13.88071063 +0000 UTC m=+0.106924266 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.889 288634 DEBUG eventlet.wsgi.server [-] (288634) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:13.890 288634 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: Accept: */*#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: Connection: close#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: Content-Length: 100#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: Content-Type: application/x-www-form-urlencoded#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: Host: 169.254.169.254#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: User-Agent: curl/7.84.0#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: X-Forwarded-For: 10.100.0.9#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: X-Ovn-Network-Id: b46a3397-654d-4ceb-be75-a322ea7e5091#015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: #015
Dec  3 02:20:13 compute-0 ovn_metadata_agent[288523]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  3 02:20:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:14.151 288634 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  3 02:20:14 compute-0 haproxy-metadata-proxy-b46a3397-654d-4ceb-be75-a322ea7e5091[452488]: 10.100.0.9:57142 [03/Dec/2025:02:20:13.888] listener listener/metadata 0/0/0/264/264 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  3 02:20:14 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:14.152 288634 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2619309#033[00m
Dec  3 02:20:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 681 B/s rd, 1.9 KiB/s wr, 0 op/s
Dec  3 02:20:15 compute-0 nova_compute[351485]: 2025-12-03 02:20:15.681 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.601 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.601 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.602 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.604 351492 INFO nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Terminating instance#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.605 351492 DEBUG nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:20:16 compute-0 kernel: tap0d927baf-41 (unregistering): left promiscuous mode
Dec  3 02:20:16 compute-0 NetworkManager[48912]: <info>  [1764728416.7454] device (tap0d927baf-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.754 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00185|binding|INFO|Releasing lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 from this chassis (sb_readonly=0)
Dec  3 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00186|binding|INFO|Setting lport 0d927baf-41d2-458f-b4c0-1218ba0eec13 down in Southbound
Dec  3 02:20:16 compute-0 ovn_controller[89134]: 2025-12-03T02:20:16Z|00187|binding|INFO|Removing iface tap0d927baf-41 ovn-installed in OVS
Dec  3 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.763 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:61:16 10.100.0.9'], port_security=['fa:16:3e:55:61:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '48201127-9aa0-4cde-a41d-6790411480a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b46a3397-654d-4ceb-be75-a322ea7e5091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38f1a4b24bc74f43a70b0fc06f48b9a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ad947c5-c226-4f50-af5d-711cff08343d b2c98479-d787-4d5e-b71b-1dd64682dc39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2444ad0-b9d4-4c2c-9115-6ef22db7fd9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=0d927baf-41d2-458f-b4c0-1218ba0eec13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.765 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 0d927baf-41d2-458f-b4c0-1218ba0eec13 in datapath b46a3397-654d-4ceb-be75-a322ea7e5091 unbound from our chassis#033[00m
Dec  3 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.767 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b46a3397-654d-4ceb-be75-a322ea7e5091, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  3 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.768 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[ba424624-fc0d-445c-9938-a32562aa0b69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:16 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:16.769 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 namespace which is not needed anymore#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.787 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:16 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  3 02:20:16 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 46.842s CPU time.
Dec  3 02:20:16 compute-0 systemd-machined[138558]: Machine qemu-14-instance-0000000d terminated.
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.837 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.847 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.854 351492 INFO nova.virt.libvirt.driver [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Instance destroyed successfully.#033[00m
Dec  3 02:20:16 compute-0 nova_compute[351485]: 2025-12-03 02:20:16.855 351492 DEBUG nova.objects.instance [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lazy-loading 'resources' on Instance uuid 48201127-9aa0-4cde-a41d-6790411480a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : haproxy version is 2.8.14-c23fe91
Dec  3 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [NOTICE]   (452486) : path to executable is /usr/sbin/haproxy
Dec  3 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [WARNING]  (452486) : Exiting Master process...
Dec  3 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [ALERT]    (452486) : Current worker (452488) exited with code 143 (Terminated)
Dec  3 02:20:16 compute-0 neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091[452461]: [WARNING]  (452486) : All workers exited. Exiting... (0)
Dec  3 02:20:16 compute-0 systemd[1]: libpod-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope: Deactivated successfully.
Dec  3 02:20:16 compute-0 podman[454425]: 2025-12-03 02:20:16.996586147 +0000 UTC m=+0.071010990 container died 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.003 351492 DEBUG nova.virt.libvirt.vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:18:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1226962462',display_name='tempest-TestServerBasicOps-server-1226962462',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1226962462',id=13,image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOrfBag91AFIZ3cgT/3v6DEUVxmWorZPsTvJBCT3v1fcFACxQDoahVOND6soOw4PzOfL8jvcBATzzdMnLLkWJn8sw8+PBGsPmPnV6EhNG8NjAI9UA8OPVUdoPITGd7W+8A==',key_name='tempest-TestServerBasicOps-954582748',keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:19:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38f1a4b24bc74f43a70b0fc06f48b9a2',ramdisk_id='',reservation_id='r-qt8l6h9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ef773cba-72f0-486f-b5e5-792ff26bb688',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1222487710',owner_user_name='tempest-TestServerBasicOps-1222487710-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:20:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2de48f7608ea45c8ac558125d72373c4',uuid=48201127-9aa0-4cde-a41d-6790411480a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": 
"fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.005 351492 DEBUG nova.network.os_vif_util [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converting VIF {"id": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "address": "fa:16:3e:55:61:16", "network": {"id": "b46a3397-654d-4ceb-be75-a322ea7e5091", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1788173895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38f1a4b24bc74f43a70b0fc06f48b9a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d927baf-41", "ovs_interfaceid": "0d927baf-41d2-458f-b4c0-1218ba0eec13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.007 351492 DEBUG nova.network.os_vif_util [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.008 351492 DEBUG os_vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.012 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.013 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d927baf-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.022 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.027 351492 INFO os_vif [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:61:16,bridge_name='br-int',has_traffic_filtering=True,id=0d927baf-41d2-458f-b4c0-1218ba0eec13,network=Network(b46a3397-654d-4ceb-be75-a322ea7e5091),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d927baf-41')#033[00m
Dec  3 02:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb-userdata-shm.mount: Deactivated successfully.
Dec  3 02:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e3ca008127e0843a16153cba25a8cdfe9386b435396ea086db82b591e22278b-merged.mount: Deactivated successfully.
Dec  3 02:20:17 compute-0 podman[454425]: 2025-12-03 02:20:17.09246719 +0000 UTC m=+0.166891963 container cleanup 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:20:17 compute-0 systemd[1]: libpod-conmon-57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb.scope: Deactivated successfully.
Dec  3 02:20:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 7.2 KiB/s wr, 2 op/s
Dec  3 02:20:17 compute-0 podman[454468]: 2025-12-03 02:20:17.214909814 +0000 UTC m=+0.082662930 container remove 57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.235 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[b160dc28-9630-48ad-a7f3-1d35b5ca817a]: (4, ('Wed Dec  3 02:20:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 (57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb)\n57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb\nWed Dec  3 02:20:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 (57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb)\n57a8a60584e8dfa48c54c7f4c808b077f95b7cac7819fa02e6dc520c2bcbc2eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.238 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[acefaad2-a278-4985-aef8-af5953adedae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.239 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb46a3397-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.243 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:17 compute-0 kernel: tapb46a3397-60: left promiscuous mode
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.263 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.271 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9e1799-351d-4606-bef4-1dbbf6cb1ae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.289 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[de573608-ce00-4163-8a87-a6644b080c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.290 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[1e074a76-ee02-41d9-bc3c-8552d58ed06b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.319 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c44181-be02-49b2-ac46-d1b6c4eb9555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718180, 'reachable_time': 33575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454484, 'error': None, 'target': 'ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 systemd[1]: run-netns-ovnmeta\x2db46a3397\x2d654d\x2d4ceb\x2dbe75\x2da322ea7e5091.mount: Deactivated successfully.
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.324 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b46a3397-654d-4ceb-be75-a322ea7e5091 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:20:17 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:17.325 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[008f8fcd-015c-4747-9b38-a6671e5b5847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.939 351492 INFO nova.virt.libvirt.driver [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deleting instance files /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4_del#033[00m
Dec  3 02:20:17 compute-0 nova_compute[351485]: 2025-12-03 02:20:17.941 351492 INFO nova.virt.libvirt.driver [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deletion of /var/lib/nova/instances/48201127-9aa0-4cde-a41d-6790411480a4_del complete#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.018 351492 INFO nova.compute.manager [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 1.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.019 351492 DEBUG oslo.service.loopingcall [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.019 351492 DEBUG nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.020 351492 DEBUG nova.network.neutron [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.055 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.055 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG oslo_concurrency.lockutils [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.056 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.057 351492 DEBUG nova.compute.manager [req-c243c2bc-b676-4eb7-8f35-09ba5ec29257 req-e2902c0c-b780-4649-a8c7-58d3b62c53fd 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-unplugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:20:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.609 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.610 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.739 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:18 compute-0 nova_compute[351485]: 2025-12-03 02:20:18.835 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:18.836 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:20:18 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:18.843 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:20:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:20:19 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283653761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:20:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 216 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.6 KiB/s wr, 7 op/s
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.180 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.301 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.504 351492 DEBUG nova.network.neutron [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.539 351492 INFO nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Took 1.52 seconds to deallocate network for instance.#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.608 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.610 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.654 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.692 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.694 351492 DEBUG nova.compute.provider_tree [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.729 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.763 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.842 351492 DEBUG oslo_concurrency.processutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.982 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.985 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3778MB free_disk=59.8972053527832GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:20:19 compute-0 nova_compute[351485]: 2025-12-03 02:20:19.986 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.153 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.155 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "48201127-9aa0-4cde-a41d-6790411480a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.156 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.157 351492 DEBUG oslo_concurrency.lockutils [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.158 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] No waiting events found dispatching network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.160 351492 WARNING nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received unexpected event network-vif-plugged-0d927baf-41d2-458f-b4c0-1218ba0eec13 for instance with vm_state deleted and task_state None.#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.161 351492 DEBUG nova.compute.manager [req-d67506dc-2b3c-4352-b81d-5854982088e7 req-5a007d13-6a5a-49d3-8113-90b2756fb42b 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Received event network-vif-deleted-0d927baf-41d2-458f-b4c0-1218ba0eec13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:20:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:20:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205015450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.373 351492 DEBUG oslo_concurrency.processutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.383 351492 DEBUG nova.compute.provider_tree [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.397 351492 DEBUG nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.417 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.421 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.470 351492 INFO nova.scheduler.client.report [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Deleted allocations for instance 48201127-9aa0-4cde-a41d-6790411480a4#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.523 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.525 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.526 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.559 351492 DEBUG oslo_concurrency.lockutils [None req-3972cf35-f40a-4196-a51d-b5876310640b 2de48f7608ea45c8ac558125d72373c4 38f1a4b24bc74f43a70b0fc06f48b9a2 - - default default] Lock "48201127-9aa0-4cde-a41d-6790411480a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:20 compute-0 nova_compute[351485]: 2025-12-03 02:20:20.586 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:20:20 compute-0 podman[454534]: 2025-12-03 02:20:20.887945825 +0000 UTC m=+0.144163240 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:20:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:20:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420898867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.070 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.082 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.102 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.132 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:20:21 compute-0 nova_compute[351485]: 2025-12-03 02:20:21.133 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 31 op/s
Dec  3 02:20:22 compute-0 nova_compute[351485]: 2025-12-03 02:20:22.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.3 KiB/s wr, 31 op/s
Dec  3 02:20:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:23 compute-0 nova_compute[351485]: 2025-12-03 02:20:23.742 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:24 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:24.848 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:20:24 compute-0 podman[454574]: 2025-12-03 02:20:24.871588412 +0000 UTC m=+0.110732244 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  3 02:20:24 compute-0 podman[454575]: 2025-12-03 02:20:24.877944152 +0000 UTC m=+0.102865781 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:20:24 compute-0 podman[454578]: 2025-12-03 02:20:24.905276005 +0000 UTC m=+0.122398124 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:20:24 compute-0 podman[454576]: 2025-12-03 02:20:24.908764914 +0000 UTC m=+0.131735838 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec  3 02:20:24 compute-0 podman[454573]: 2025-12-03 02:20:24.932482545 +0000 UTC m=+0.176125184 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.136 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.136 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.137 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:20:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.707 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.708 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.709 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:20:25 compute-0 nova_compute[351485]: 2025-12-03 02:20:25.710 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:20:27 compute-0 nova_compute[351485]: 2025-12-03 02:20:27.025 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 7.2 KiB/s wr, 30 op/s
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.079 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.107 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.108 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.109 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.110 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.110 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:20:28
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec  3 02:20:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.746 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:28 compute-0 ovn_controller[89134]: 2025-12-03T02:20:28Z|00188|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec  3 02:20:28 compute-0 nova_compute[351485]: 2025-12-03 02:20:28.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:20:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Dec  3 02:20:29 compute-0 ovn_controller[89134]: 2025-12-03T02:20:29Z|00189|binding|INFO|Releasing lport 50c454e1-4a4b-4aad-b47b-dafc7b079018 from this chassis (sb_readonly=0)
Dec  3 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.250 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.545 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.546 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:29 compute-0 nova_compute[351485]: 2025-12-03 02:20:29.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:29 compute-0 podman[158098]: time="2025-12-03T02:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:20:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Dec  3 02:20:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 24 op/s
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:20:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.849 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764728416.846394, 48201127-9aa0-4cde-a41d-6790411480a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.851 351492 INFO nova.compute.manager [-] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] VM Stopped (Lifecycle Event)#033[00m
Dec  3 02:20:31 compute-0 nova_compute[351485]: 2025-12-03 02:20:31.878 351492 DEBUG nova.compute.manager [None req-89ec3af4-db0a-4a58-8dbc-67cb64d9a8f3 - - - - - -] [instance: 48201127-9aa0-4cde-a41d-6790411480a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:20:32 compute-0 nova_compute[351485]: 2025-12-03 02:20:32.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec  3 02:20:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:33 compute-0 nova_compute[351485]: 2025-12-03 02:20:33.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 1022 B/s wr, 0 op/s
Dec  3 02:20:35 compute-0 nova_compute[351485]: 2025-12-03 02:20:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:20:35 compute-0 nova_compute[351485]: 2025-12-03 02:20:35.580 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:20:37 compute-0 nova_compute[351485]: 2025-12-03 02:20:37.035 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:20:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:38 compute-0 nova_compute[351485]: 2025-12-03 02:20:38.750 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007578104650973498 of space, bias 1.0, pg target 0.22734313952920493 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:20:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:20:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:20:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:20:42 compute-0 nova_compute[351485]: 2025-12-03 02:20:42.039 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:43 compute-0 nova_compute[351485]: 2025-12-03 02:20:43.754 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:44 compute-0 podman[454674]: 2025-12-03 02:20:44.803102351 +0000 UTC m=+0.094289474 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:20:44 compute-0 podman[454675]: 2025-12-03 02:20:44.832493551 +0000 UTC m=+0.112175799 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 02:20:44 compute-0 podman[454676]: 2025-12-03 02:20:44.838268014 +0000 UTC m=+0.091649829 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:20:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:47 compute-0 nova_compute[351485]: 2025-12-03 02:20:47.042 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:20:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:20:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:20:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3041175653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:20:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:48 compute-0 nova_compute[351485]: 2025-12-03 02:20:48.759 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:51 compute-0 podman[454730]: 2025-12-03 02:20:51.897848609 +0000 UTC m=+0.144893983 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  3 02:20:52 compute-0 nova_compute[351485]: 2025-12-03 02:20:52.045 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:53 compute-0 nova_compute[351485]: 2025-12-03 02:20:53.763 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:20:55 compute-0 podman[454750]: 2025-12-03 02:20:55.863631699 +0000 UTC m=+0.103861054 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 02:20:55 compute-0 podman[454752]: 2025-12-03 02:20:55.88559814 +0000 UTC m=+0.111657215 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Dec  3 02:20:55 compute-0 podman[454755]: 2025-12-03 02:20:55.896438996 +0000 UTC m=+0.128476200 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  3 02:20:55 compute-0 podman[454751]: 2025-12-03 02:20:55.901983192 +0000 UTC m=+0.131675809 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:20:55 compute-0 podman[454749]: 2025-12-03 02:20:55.914829465 +0000 UTC m=+0.157963492 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 02:20:57 compute-0 nova_compute[351485]: 2025-12-03 02:20:57.048 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:20:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:20:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:20:58 compute-0 nova_compute[351485]: 2025-12-03 02:20:58.767 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:20:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.653 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:20:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:20:59.654 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:20:59 compute-0 podman[158098]: time="2025-12-03T02:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:20:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
Dec  3 02:21:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:21:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:21:02 compute-0 nova_compute[351485]: 2025-12-03 02:21:02.052 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:21:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:03 compute-0 nova_compute[351485]: 2025-12-03 02:21:03.770 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:21:06 compute-0 ovn_controller[89134]: 2025-12-03T02:21:06Z|00190|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  3 02:21:07 compute-0 nova_compute[351485]: 2025-12-03 02:21:07.056 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  3 02:21:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:08 compute-0 nova_compute[351485]: 2025-12-03 02:21:08.772 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4a0ee2e1-c667-4b50-a81f-3a6d184270d6 does not exist
Dec  3 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5bbfe0e-7364-4114-9ce7-2848f397aeb9 does not exist
Dec  3 02:21:09 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9a9bf96b-90e8-4d64-9b5a-d05a6728e568 does not exist
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:21:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:21:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:10 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.761580512 +0000 UTC m=+0.084979201 container create 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.727647524 +0000 UTC m=+0.051046243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:10 compute-0 systemd[1]: Started libpod-conmon-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope.
Dec  3 02:21:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.920007336 +0000 UTC m=+0.243406065 container init 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.937804009 +0000 UTC m=+0.261202688 container start 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.944942151 +0000 UTC m=+0.268340860 container attach 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 02:21:10 compute-0 crazy_hertz[455135]: 167 167
Dec  3 02:21:10 compute-0 systemd[1]: libpod-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope: Deactivated successfully.
Dec  3 02:21:10 compute-0 podman[455119]: 2025-12-03 02:21:10.956331762 +0000 UTC m=+0.279730441 container died 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-df070375a94010c9c5a20c5eef1573668a3163d290669d2a8bd35ab8ebef2997-merged.mount: Deactivated successfully.
Dec  3 02:21:11 compute-0 podman[455119]: 2025-12-03 02:21:11.035399545 +0000 UTC m=+0.358798204 container remove 2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:21:11 compute-0 systemd[1]: libpod-conmon-2186ce8074cf784ad3fa48af6b774530fdea8a47b42454841aea950d6bcb0898.scope: Deactivated successfully.
Dec  3 02:21:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.309970969 +0000 UTC m=+0.077320774 container create 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.283866141 +0000 UTC m=+0.051215956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:11 compute-0 systemd[1]: Started libpod-conmon-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope.
Dec  3 02:21:11 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.481188194 +0000 UTC m=+0.248538049 container init 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.526194685 +0000 UTC m=+0.293544500 container start 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:21:11 compute-0 podman[455158]: 2025-12-03 02:21:11.533060359 +0000 UTC m=+0.300410174 container attach 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:21:12 compute-0 nova_compute[351485]: 2025-12-03 02:21:12.058 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:12 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 02:21:12 compute-0 hopeful_poincare[455174]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:21:12 compute-0 hopeful_poincare[455174]: --> relative data size: 1.0
Dec  3 02:21:12 compute-0 hopeful_poincare[455174]: --> All data devices are unavailable
Dec  3 02:21:12 compute-0 systemd[1]: libpod-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Deactivated successfully.
Dec  3 02:21:12 compute-0 systemd[1]: libpod-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Consumed 1.221s CPU time.
Dec  3 02:21:12 compute-0 podman[455204]: 2025-12-03 02:21:12.890884987 +0000 UTC m=+0.054898332 container died 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e18a8033d7dd0d8388ff85ab5a97e648934b5fdb4b9db8bc220312d1e03fc6a-merged.mount: Deactivated successfully.
Dec  3 02:21:12 compute-0 podman[455204]: 2025-12-03 02:21:12.965268238 +0000 UTC m=+0.129281593 container remove 1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:21:12 compute-0 systemd[1]: libpod-conmon-1f708ac1f366c0067f4ad1cfd14875670117d68fdcffa0f57e0380b366be18c8.scope: Deactivated successfully.
Dec  3 02:21:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:13 compute-0 nova_compute[351485]: 2025-12-03 02:21:13.774 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.037243482 +0000 UTC m=+0.057846184 container create a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 02:21:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 02:21:14 compute-0 systemd[1]: Started libpod-conmon-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope.
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.01522769 +0000 UTC m=+0.035830482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.14798567 +0000 UTC m=+0.168588422 container init a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.163475337 +0000 UTC m=+0.184078049 container start a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.169984511 +0000 UTC m=+0.190587263 container attach a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:21:14 compute-0 vigilant_booth[455366]: 167 167
Dec  3 02:21:14 compute-0 systemd[1]: libpod-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope: Deactivated successfully.
Dec  3 02:21:14 compute-0 conmon[455366]: conmon a4e5ca782f10c851d5e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope/container/memory.events
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.176213897 +0000 UTC m=+0.196816609 container died a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c27f156ffd3591e36728d5c3be233615f7c7a6012e042b5de41c0f7c80f05812-merged.mount: Deactivated successfully.
Dec  3 02:21:14 compute-0 podman[455349]: 2025-12-03 02:21:14.234764361 +0000 UTC m=+0.255367053 container remove a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:21:14 compute-0 systemd[1]: libpod-conmon-a4e5ca782f10c851d5e1c3df982067dc8ed4ef9c8f8a35512b9291b56a3d327e.scope: Deactivated successfully.
Dec  3 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.550526878 +0000 UTC m=+0.096990050 container create db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:21:14 compute-0 nova_compute[351485]: 2025-12-03 02:21:14.581 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.515496989 +0000 UTC m=+0.061960211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:14 compute-0 systemd[1]: Started libpod-conmon-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope.
Dec  3 02:21:14 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.773351201 +0000 UTC m=+0.319814463 container init db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.796296999 +0000 UTC m=+0.342760151 container start db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:21:14 compute-0 podman[455388]: 2025-12-03 02:21:14.80234093 +0000 UTC m=+0.348804182 container attach db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:21:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:15 compute-0 gallant_gates[455404]: {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    "0": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "devices": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "/dev/loop3"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            ],
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_name": "ceph_lv0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_size": "21470642176",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "name": "ceph_lv0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "tags": {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_name": "ceph",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.crush_device_class": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.encrypted": "0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_id": "0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.vdo": "0"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            },
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "vg_name": "ceph_vg0"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        }
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    ],
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    "1": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "devices": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "/dev/loop4"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            ],
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_name": "ceph_lv1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_size": "21470642176",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "name": "ceph_lv1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "tags": {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_name": "ceph",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.crush_device_class": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.encrypted": "0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_id": "1",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.vdo": "0"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            },
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "vg_name": "ceph_vg1"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        }
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    ],
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    "2": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "devices": [
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "/dev/loop5"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            ],
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_name": "ceph_lv2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_size": "21470642176",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "name": "ceph_lv2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "tags": {
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.cluster_name": "ceph",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.crush_device_class": "",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.encrypted": "0",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osd_id": "2",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:                "ceph.vdo": "0"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            },
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "type": "block",
Dec  3 02:21:15 compute-0 gallant_gates[455404]:            "vg_name": "ceph_vg2"
Dec  3 02:21:15 compute-0 gallant_gates[455404]:        }
Dec  3 02:21:15 compute-0 gallant_gates[455404]:    ]
Dec  3 02:21:15 compute-0 gallant_gates[455404]: }
Dec  3 02:21:15 compute-0 systemd[1]: libpod-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope: Deactivated successfully.
Dec  3 02:21:15 compute-0 podman[455388]: 2025-12-03 02:21:15.659436525 +0000 UTC m=+1.205899677 container died db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c294b47e3d40a65f8a2a3252f9615e6e750cd2766e5f52fd6733f9618d1ec05-merged.mount: Deactivated successfully.
Dec  3 02:21:15 compute-0 podman[455388]: 2025-12-03 02:21:15.753918804 +0000 UTC m=+1.300381966 container remove db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:21:15 compute-0 systemd[1]: libpod-conmon-db42c80b62723d1943a852a9ea80ee7ee1cbf70b833eafb7c8e1a354b50a6536.scope: Deactivated successfully.
Dec  3 02:21:15 compute-0 podman[455424]: 2025-12-03 02:21:15.816677286 +0000 UTC m=+0.094429028 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:21:15 compute-0 podman[455422]: 2025-12-03 02:21:15.814831294 +0000 UTC m=+0.097778933 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 02:21:15 compute-0 podman[455414]: 2025-12-03 02:21:15.839743377 +0000 UTC m=+0.136154436 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  3 02:21:16 compute-0 podman[455618]: 2025-12-03 02:21:16.887065346 +0000 UTC m=+0.090432985 container create 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 02:21:16 compute-0 podman[455618]: 2025-12-03 02:21:16.847866229 +0000 UTC m=+0.051233918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:16 compute-0 systemd[1]: Started libpod-conmon-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope.
Dec  3 02:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.046258312 +0000 UTC m=+0.249625951 container init 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.06070309 +0000 UTC m=+0.264070699 container start 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:21:17 compute-0 nova_compute[351485]: 2025-12-03 02:21:17.061 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.066088572 +0000 UTC m=+0.269456181 container attach 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 02:21:17 compute-0 sweet_elgamal[455634]: 167 167
Dec  3 02:21:17 compute-0 systemd[1]: libpod-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope: Deactivated successfully.
Dec  3 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.077426312 +0000 UTC m=+0.280793951 container died 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  3 02:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e352f0acd372fd2763ff8eefdc5c70419cf3cb2543b1f644a85956bc8e16e7b4-merged.mount: Deactivated successfully.
Dec  3 02:21:17 compute-0 podman[455618]: 2025-12-03 02:21:17.155941049 +0000 UTC m=+0.359308668 container remove 8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:21:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:17 compute-0 systemd[1]: libpod-conmon-8cbece5e69342541a387b201a63dd82bd400da1a7ff385569de9a8b038cf0970.scope: Deactivated successfully.
Dec  3 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.46863271 +0000 UTC m=+0.098994706 container create 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.427958612 +0000 UTC m=+0.058320668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:21:17 compute-0 systemd[1]: Started libpod-conmon-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope.
Dec  3 02:21:17 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.650048324 +0000 UTC m=+0.280410350 container init 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.673030303 +0000 UTC m=+0.303392329 container start 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:21:17 compute-0 podman[455660]: 2025-12-03 02:21:17.679778374 +0000 UTC m=+0.310140470 container attach 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:21:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:18 compute-0 nova_compute[351485]: 2025-12-03 02:21:18.778 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:18 compute-0 frosty_kilby[455676]: {
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_id": 2,
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "type": "bluestore"
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    },
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_id": 1,
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "type": "bluestore"
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    },
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_id": 0,
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:        "type": "bluestore"
Dec  3 02:21:18 compute-0 frosty_kilby[455676]:    }
Dec  3 02:21:18 compute-0 frosty_kilby[455676]: }
Dec  3 02:21:18 compute-0 systemd[1]: libpod-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Deactivated successfully.
Dec  3 02:21:18 compute-0 systemd[1]: libpod-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Consumed 1.242s CPU time.
Dec  3 02:21:18 compute-0 podman[455709]: 2025-12-03 02:21:18.993182626 +0000 UTC m=+0.051166156 container died 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6184be35db45b6cedb11841dfc5344c61d7efbc121c9d4005e1d7b1a0acfde36-merged.mount: Deactivated successfully.
Dec  3 02:21:19 compute-0 podman[455709]: 2025-12-03 02:21:19.16825106 +0000 UTC m=+0.226234450 container remove 0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec  3 02:21:19 compute-0 systemd[1]: libpod-conmon-0e0bbbc91d10eeeb10cb46d26d0118fa7cdcf3608cadbd3d07d8290f035f037c.scope: Deactivated successfully.
Dec  3 02:21:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:21:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:19 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:21:19 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e7e81c02-08ad-41d5-9dbd-c4521bb1db0d does not exist
Dec  3 02:21:19 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a38a251f-2b48-4871-9505-9ad7a3514fed does not exist
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.514 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.524 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:21:19.525951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.564 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 43.4296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:21:19.566663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.572 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.575 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 1172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:21:19.574736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.583 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:21:19.583300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.588 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.591 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.591 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:21:19.587853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.595 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:21:19.591408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:21:19.595736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.615 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.616 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.617 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.619 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:21:19.619707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.638 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.640 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.642 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:21:19 compute-0 nova_compute[351485]: 2025-12-03 02:21:19.643 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.686 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 30342144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.687 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:21:19.688499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:21:19.690614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2892253301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 193523124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:21:19.692630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.693 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.694 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:21:19.694785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:21:19.696811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.697 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:21:19.699088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.699 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.700 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:21:19.701184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.701 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 9924409915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:21:19.703224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.703 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.707 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:21:19.706186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.710 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 122560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:21:19.709883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:21:19.711706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:21:19.712992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:21:19.714854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.715 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:21:19.716570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:21:19.717975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:21:19.719284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.724 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:21:19.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:21:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:21:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220651878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.121 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.234 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.235 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:21:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:20 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.738 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.740 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.94282150268555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.741 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.741 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.825 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.826 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.826 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:21:20 compute-0 nova_compute[351485]: 2025-12-03 02:21:20.866 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:21:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec  3 02:21:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:21:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786476869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.470 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.489 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.516 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.522 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:21:21 compute-0 nova_compute[351485]: 2025-12-03 02:21:21.523 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:21:22 compute-0 nova_compute[351485]: 2025-12-03 02:21:22.067 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:22 compute-0 podman[455821]: 2025-12-03 02:21:22.891978204 +0000 UTC m=+0.128765917 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 02:21:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec  3 02:21:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:23 compute-0 nova_compute[351485]: 2025-12-03 02:21:23.780 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.524 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.525 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.526 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.710 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.711 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.711 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:21:26 compute-0 nova_compute[351485]: 2025-12-03 02:21:26.712 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:21:26 compute-0 podman[455843]: 2025-12-03 02:21:26.869726803 +0000 UTC m=+0.119626010 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350)
Dec  3 02:21:26 compute-0 podman[455845]: 2025-12-03 02:21:26.87918555 +0000 UTC m=+0.113217269 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, vcs-type=git, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler)
Dec  3 02:21:26 compute-0 podman[455842]: 2025-12-03 02:21:26.887755752 +0000 UTC m=+0.139923063 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:21:26 compute-0 podman[455844]: 2025-12-03 02:21:26.891854888 +0000 UTC m=+0.128333946 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:21:26 compute-0 podman[455846]: 2025-12-03 02:21:26.90150003 +0000 UTC m=+0.134042017 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:21:27 compute-0 nova_compute[351485]: 2025-12-03 02:21:27.070 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec  3 02:21:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:21:28
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root']
Dec  3 02:21:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:21:28 compute-0 nova_compute[351485]: 2025-12-03 02:21:28.783 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:21:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec  3 02:21:29 compute-0 podman[158098]: time="2025-12-03T02:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:21:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8655 "" "Go-http-client/1.1"
Dec  3 02:21:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:21:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.797 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.817 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.818 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.820 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.820 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.821 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:31 compute-0 nova_compute[351485]: 2025-12-03 02:21:31.822 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:32 compute-0 nova_compute[351485]: 2025-12-03 02:21:32.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:32 compute-0 nova_compute[351485]: 2025-12-03 02:21:32.868 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec  3 02:21:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:33 compute-0 nova_compute[351485]: 2025-12-03 02:21:33.786 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 62 op/s
Dec  3 02:21:35 compute-0 nova_compute[351485]: 2025-12-03 02:21:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:35 compute-0 nova_compute[351485]: 2025-12-03 02:21:35.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:21:37 compute-0 nova_compute[351485]: 2025-12-03 02:21:37.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Dec  3 02:21:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:38 compute-0 nova_compute[351485]: 2025-12-03 02:21:38.789 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007578104650973498 of space, bias 1.0, pg target 0.22734313952920493 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:21:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:21:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Dec  3 02:21:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:42 compute-0 nova_compute[351485]: 2025-12-03 02:21:42.079 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:43 compute-0 nova_compute[351485]: 2025-12-03 02:21:43.793 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.576 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.577 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.578 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.579 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.595 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.604 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.604 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Image id 8876482c-db67-48c0-9203-60685152fc9d yields fingerprint 3a2172ba33277b1fb4d8f3381bb190374609d10e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.605 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] image 8876482c-db67-48c0-9203-60685152fc9d at (/var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e): checking#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.605 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] image 8876482c-db67-48c0-9203-60685152fc9d at (/var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.608 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.609 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 WARNING nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Active base files: /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.610 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Removable base files: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8 /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4 /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b9e804eb90834f1320f9fd6c25a03e15d4052aa8#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c29aeb8fc873eee85b0369901388993e8201c8d4#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.611 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d68b22249947adf9ae6139a52d3c87b68df8a601#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 DEBUG nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec  3 02:21:45 compute-0 nova_compute[351485]: 2025-12-03 02:21:45.612 351492 INFO nova.virt.libvirt.imagecache [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec  3 02:21:46 compute-0 podman[455943]: 2025-12-03 02:21:46.890949826 +0000 UTC m=+0.122620044 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Dec  3 02:21:46 compute-0 podman[455942]: 2025-12-03 02:21:46.908962194 +0000 UTC m=+0.146511158 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:21:46 compute-0 podman[455944]: 2025-12-03 02:21:46.921384175 +0000 UTC m=+0.156186512 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:21:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:21:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2028091222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:21:47 compute-0 nova_compute[351485]: 2025-12-03 02:21:47.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.488701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507488842, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1591, "num_deletes": 251, "total_data_size": 2575246, "memory_usage": 2615792, "flush_reason": "Manual Compaction"}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507510857, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2528507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40114, "largest_seqno": 41704, "table_properties": {"data_size": 2521096, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15353, "raw_average_key_size": 20, "raw_value_size": 2506205, "raw_average_value_size": 3276, "num_data_blocks": 197, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728339, "oldest_key_time": 1764728339, "file_creation_time": 1764728507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 22243 microseconds, and 12752 cpu microseconds.
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.510956) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2528507 bytes OK
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.510983) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514377) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514402) EVENT_LOG_v1 {"time_micros": 1764728507514395, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.514425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2568334, prev total WAL file size 2568334, number of live WAL files 2.
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.516014) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2469KB)], [95(7028KB)]
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507516060, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9725281, "oldest_snapshot_seqno": -1}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5777 keys, 7972853 bytes, temperature: kUnknown
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507599460, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7972853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7936013, "index_size": 21306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149768, "raw_average_key_size": 25, "raw_value_size": 7833272, "raw_average_value_size": 1355, "num_data_blocks": 848, "num_entries": 5777, "num_filter_entries": 5777, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.600676) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7972853 bytes
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.604052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.3 rd, 94.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 6296, records dropped: 519 output_compression: NoCompression
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.604084) EVENT_LOG_v1 {"time_micros": 1764728507604069, "job": 56, "event": "compaction_finished", "compaction_time_micros": 84345, "compaction_time_cpu_micros": 36834, "output_level": 6, "num_output_files": 1, "total_output_size": 7972853, "num_input_records": 6296, "num_output_records": 5777, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507607111, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728507610788, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.515824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:47 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:21:47.611995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:21:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:48 compute-0 nova_compute[351485]: 2025-12-03 02:21:48.795 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:52 compute-0 nova_compute[351485]: 2025-12-03 02:21:52.086 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:53 compute-0 nova_compute[351485]: 2025-12-03 02:21:53.800 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:53 compute-0 podman[456001]: 2025-12-03 02:21:53.885238326 +0000 UTC m=+0.135039735 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:21:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:57 compute-0 nova_compute[351485]: 2025-12-03 02:21:57.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:57 compute-0 podman[456022]: 2025-12-03 02:21:57.876467404 +0000 UTC m=+0.105195991 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:21:57 compute-0 podman[456021]: 2025-12-03 02:21:57.887207837 +0000 UTC m=+0.123684753 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base 
Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:21:57 compute-0 podman[456023]: 2025-12-03 02:21:57.895854362 +0000 UTC m=+0.117133939 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc.)
Dec  3 02:21:57 compute-0 podman[456020]: 2025-12-03 02:21:57.897829387 +0000 UTC m=+0.143152213 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:21:57 compute-0 podman[456028]: 2025-12-03 02:21:57.910192197 +0000 UTC m=+0.127244515 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:21:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:21:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:21:58 compute-0 nova_compute[351485]: 2025-12-03 02:21:58.802 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:21:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.655 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:21:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:21:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:21:59 compute-0 podman[158098]: time="2025-12-03T02:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:21:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 02:22:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:22:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:22:02 compute-0 nova_compute[351485]: 2025-12-03 02:22:02.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:03 compute-0 nova_compute[351485]: 2025-12-03 02:22:03.808 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:07 compute-0 nova_compute[351485]: 2025-12-03 02:22:07.098 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:08 compute-0 nova_compute[351485]: 2025-12-03 02:22:08.815 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.102 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.386 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.387 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.417 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 02:22:12 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:22:12 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.532 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.534 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.550 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.551 351492 INFO nova.compute.claims [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 02:22:12 compute-0 nova_compute[351485]: 2025-12-03 02:22:12.729 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:22:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490501508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:22:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.263 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.276 351492 DEBUG nova.compute.provider_tree [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.298 351492 DEBUG nova.scheduler.client.report [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.323 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.324 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.381 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.382 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 02:22:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.409 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.433 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.556 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.558 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.559 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating image(s)#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.609 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.665 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.725 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.735 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.768 351492 DEBUG nova.policy [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63f39ac2863946b8b817457e689ff933', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.819 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.832 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.833 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.834 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.834 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "3a2172ba33277b1fb4d8f3381bb190374609d10e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.874 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:13 compute-0 nova_compute[351485]: 2025-12-03 02:22:13.883 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.340 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3a2172ba33277b1fb4d8f3381bb190374609d10e 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.525 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] resizing rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.621 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.792 351492 DEBUG nova.objects.instance [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'migration_context' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.810 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.811 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Ensure instance console log exists: /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.811 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.812 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:14 compute-0 nova_compute[351485]: 2025-12-03 02:22:14.812 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 164 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 114 KiB/s wr, 9 op/s
Dec  3 02:22:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:15.782 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:22:15 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:15.783 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:22:15 compute-0 nova_compute[351485]: 2025-12-03 02:22:15.787 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:15 compute-0 nova_compute[351485]: 2025-12-03 02:22:15.922 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Successfully created port: 94fdb5b9-66bf-4e81-b411-064b08e4c71c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  3 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.106 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 191 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 961 KiB/s wr, 24 op/s
Dec  3 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.804 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Successfully updated port: 94fdb5b9-66bf-4e81-b411-064b08e4c71c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  3 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:22:17 compute-0 nova_compute[351485]: 2025-12-03 02:22:17.822 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 02:22:17 compute-0 podman[456308]: 2025-12-03 02:22:17.876331065 +0000 UTC m=+0.114355151 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 02:22:17 compute-0 podman[456307]: 2025-12-03 02:22:17.88928441 +0000 UTC m=+0.135760155 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:22:17 compute-0 podman[456309]: 2025-12-03 02:22:17.892383058 +0000 UTC m=+0.117553681 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.015 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.132 351492 DEBUG nova.compute.manager [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-changed-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.133 351492 DEBUG nova.compute.manager [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Refreshing instance network info cache due to event network-changed-94fdb5b9-66bf-4e81-b411-064b08e4c71c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  3 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.134 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:22:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:18 compute-0 nova_compute[351485]: 2025-12-03 02:22:18.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.117 351492 DEBUG nova.network.neutron [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.144 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.146 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance network_info: |[{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.148 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.149 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Refreshing network info cache for port 94fdb5b9-66bf-4e81-b411-064b08e4c71c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.156 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start _get_guest_xml network_info=[{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encrypted': False, 'encryption_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '8876482c-db67-48c0-9203-60685152fc9d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.178 351492 WARNING nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.190 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.191 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.208 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.210 351492 DEBUG nova.virt.libvirt.host [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.211 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.212 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T02:14:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='89219634-32e9-4cb5-896f-6fa0b1edfe13',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-03T02:18:51Z,direct_url=<?>,disk_format='qcow2',id=8876482c-db67-48c0-9203-60685152fc9d,min_disk=0,min_ram=0,name='tempest-scenario-img--863028734',owner='63f39ac2863946b8b817457e689ff933',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-03T02:18:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.213 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.214 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.215 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.216 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.216 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.217 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.218 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.218 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.219 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.221 351492 DEBUG nova.virt.hardware [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.226 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:20 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:22:20 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698277242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.797 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.850 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:20 compute-0 nova_compute[351485]: 2025-12-03 02:22:20.864 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 02:22:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  3 02:22:21 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292561624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.400 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.403 351492 DEBUG nova.virt.libvirt.vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:22:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.404 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.405 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.406 351492 DEBUG nova.objects.instance [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.424 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] End _get_guest_xml xml=<domain type="kvm">
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <uuid>4fb8fc07-d7b7-4be8-94da-155b040faf32</uuid>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <name>instance-0000000f</name>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <memory>131072</memory>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <vcpu>1</vcpu>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <metadata>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:name>te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q</nova:name>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:creationTime>2025-12-03 02:22:20</nova:creationTime>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:flavor name="m1.nano">
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:memory>128</nova:memory>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:disk>1</nova:disk>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:swap>0</nova:swap>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:vcpus>1</nova:vcpus>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </nova:flavor>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:owner>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:user uuid="8f61f44789494541b7c101b0fdab52f0">tempest-PrometheusGabbiTest-1008659157-project-member</nova:user>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:project uuid="63f39ac2863946b8b817457e689ff933">tempest-PrometheusGabbiTest-1008659157</nova:project>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </nova:owner>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:root type="image" uuid="8876482c-db67-48c0-9203-60685152fc9d"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <nova:ports>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <nova:port uuid="94fdb5b9-66bf-4e81-b411-064b08e4c71c">
Dec  3 02:22:21 compute-0 nova_compute[351485]:          <nova:ip type="fixed" address="10.100.1.46" ipVersion="4"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:        </nova:port>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </nova:ports>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </nova:instance>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </metadata>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <sysinfo type="smbios">
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <system>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="manufacturer">RDO</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="product">OpenStack Compute</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="serial">4fb8fc07-d7b7-4be8-94da-155b040faf32</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="uuid">4fb8fc07-d7b7-4be8-94da-155b040faf32</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <entry name="family">Virtual Machine</entry>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </system>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </sysinfo>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <os>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <boot dev="hd"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <smbios mode="sysinfo"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </os>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <features>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <acpi/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <apic/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <vmcoreinfo/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </features>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <clock offset="utc">
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <timer name="hpet" present="no"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </clock>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <cpu mode="host-model" match="exact">
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </cpu>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  <devices>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <disk type="network" device="disk">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/4fb8fc07-d7b7-4be8-94da-155b040faf32_disk">
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </source>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <target dev="vda" bus="virtio"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <disk type="network" device="cdrom">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <driver type="raw" cache="none"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <source protocol="rbd" name="vms/4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config">
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <host name="192.168.122.100" port="6789"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </source>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <auth username="openstack">
Dec  3 02:22:21 compute-0 nova_compute[351485]:        <secret type="ceph" uuid="3765feb2-36f8-5b86-b74c-64e9221f9c4c"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      </auth>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <target dev="sda" bus="sata"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </disk>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <interface type="ethernet">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <mac address="fa:16:3e:3f:0c:ae"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <driver name="vhost" rx_queue_size="512"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <mtu size="1442"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <target dev="tap94fdb5b9-66"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </interface>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <serial type="pty">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <log file="/var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/console.log" append="off"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </serial>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <video>
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <model type="virtio"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </video>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <input type="tablet" bus="usb"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <rng model="virtio">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <backend model="random">/dev/urandom</backend>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </rng>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <controller type="usb" index="0"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    <memballoon model="virtio">
Dec  3 02:22:21 compute-0 nova_compute[351485]:      <stats period="10"/>
Dec  3 02:22:21 compute-0 nova_compute[351485]:    </memballoon>
Dec  3 02:22:21 compute-0 nova_compute[351485]:  </devices>
Dec  3 02:22:21 compute-0 nova_compute[351485]: </domain>
Dec  3 02:22:21 compute-0 nova_compute[351485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.426 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Preparing to wait for external event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.426 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.427 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.427 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.428 351492 DEBUG nova.virt.libvirt.vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbi
Test-1008659157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-03T02:22:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.428 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.429 351492 DEBUG nova.network.os_vif_util [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.430 351492 DEBUG os_vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.431 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.432 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.432 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.436 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.436 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94fdb5b9-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.437 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94fdb5b9-66, col_values=(('external_ids', {'iface-id': '94fdb5b9-66bf-4e81-b411-064b08e4c71c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:0c:ae', 'vm-uuid': '4fb8fc07-d7b7-4be8-94da-155b040faf32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.440 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:21 compute-0 NetworkManager[48912]: <info>  [1764728541.4411] manager: (tap94fdb5b9-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.450 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.451 351492 INFO os_vif [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66')#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.599 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.600 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.600 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] No VIF found with MAC fa:16:3e:3f:0c:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.601 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Using config drive#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.655 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.689 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.690 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:22:21 compute-0 nova_compute[351485]: 2025-12-03 02:22:21.691 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:21.787 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:21 compute-0 podman[456734]: 2025-12-03 02:22:21.949779346 +0000 UTC m=+0.073561199 container create 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:21.911131794 +0000 UTC m=+0.034913677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:22 compute-0 systemd[1]: Started libpod-conmon-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope.
Dec  3 02:22:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.085460008 +0000 UTC m=+0.209241941 container init 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.099269438 +0000 UTC m=+0.223051321 container start 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.107655215 +0000 UTC m=+0.231437148 container attach 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:22:22 compute-0 festive_yonath[456752]: 167 167
Dec  3 02:22:22 compute-0 systemd[1]: libpod-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope: Deactivated successfully.
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.110154365 +0000 UTC m=+0.233936248 container died 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.145 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Creating config drive at /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.156 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dz43iat execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8af11b101b1fc17136d12595b7f914b9cd8d6b235134db42374acf87d6bb8585-merged.mount: Deactivated successfully.
Dec  3 02:22:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:22:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729463512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:22:22 compute-0 podman[456734]: 2025-12-03 02:22:22.193790867 +0000 UTC m=+0.317572720 container remove 3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.212 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated VIF entry in instance network info cache for port 94fdb5b9-66bf-4e81-b411-064b08e4c71c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.213 351492 DEBUG nova.network.neutron [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:22:22 compute-0 systemd[1]: libpod-conmon-3ab3eabca71d7a5541aa4db7047b4987f395990f17e35c2cd9ae46e1c566a6e6.scope: Deactivated successfully.
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.234 351492 DEBUG oslo_concurrency.lockutils [req-4bce65db-28c8-4671-9571-c7ae62546bf2 req-d8dc52c7-a091-4879-9a5e-0109ceb1d6f4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.250 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.318 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dz43iat" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.380 351492 DEBUG nova.storage.rbd_utils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] rbd image 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.398 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.413195523 +0000 UTC m=+0.069241946 container create eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.450 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.451 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.459 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.459 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.380882961 +0000 UTC m=+0.036929414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:22 compute-0 systemd[1]: Started libpod-conmon-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope.
Dec  3 02:22:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.566813952 +0000 UTC m=+0.222860445 container init eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.593269909 +0000 UTC m=+0.249316322 container start eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 02:22:22 compute-0 podman[456779]: 2025-12-03 02:22:22.598301321 +0000 UTC m=+0.254347814 container attach eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.760 351492 DEBUG oslo_concurrency.processutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config 4fb8fc07-d7b7-4be8-94da-155b040faf32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.761 351492 INFO nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deleting local config drive /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.config because it was imported into RBD.#033[00m
Dec  3 02:22:22 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  3 02:22:22 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  3 02:22:22 compute-0 kernel: tap94fdb5b9-66: entered promiscuous mode
Dec  3 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00191|binding|INFO|Claiming lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c for this chassis.
Dec  3 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.8892] manager: (tap94fdb5b9-66): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.891 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00192|binding|INFO|94fdb5b9-66bf-4e81-b411-064b08e4c71c: Claiming fa:16:3e:3f:0c:ae 10.100.1.46
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.914 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0c:ae 10.100.1.46'], port_security=['fa:16:3e:3f:0c:ae 10.100.1.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.46/16', 'neutron:device_id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=94fdb5b9-66bf-4e81-b411-064b08e4c71c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.915 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 94fdb5b9-66bf-4e81-b411-064b08e4c71c in datapath a7615b73-b987-4b91-b12c-2d7488085657 bound to our chassis#033[00m
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.918 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657#033[00m
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00193|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c ovn-installed in OVS
Dec  3 02:22:22 compute-0 ovn_controller[89134]: 2025-12-03T02:22:22Z|00194|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c up in Southbound
Dec  3 02:22:22 compute-0 nova_compute[351485]: 2025-12-03 02:22:22.928 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:22 compute-0 systemd-machined[138558]: New machine qemu-16-instance-0000000f.
Dec  3 02:22:22 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.946 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[93b1f415-013f-4d2f-b6fc-a68f4479cc0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:22 compute-0 systemd-udevd[456869]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.9737] device (tap94fdb5b9-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 02:22:22 compute-0 NetworkManager[48912]: <info>  [1764728542.9744] device (tap94fdb5b9-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.984 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[61d0e853-0882-4592-9d1f-b885f7acbab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:22.988 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[ba185987-077d-4a14-b424-0fa37ec93e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.014 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c5ec37-b77c-419b-a4cb-aad635780709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.033 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[99489513-c758-41c6-b955-88971cd22de6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 32339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 456899, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.046 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[e76d2a00-b4a6-4fda-9e2c-1e471a10d7b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719227, 'tstamp': 719227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 456901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719234, 'tstamp': 719234}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 456901, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.048 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.050 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.051 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.052 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.053 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:22:23 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:23.053 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.062 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3777MB free_disk=59.92206954956055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.063 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.302 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.302 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.313 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.314 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.329 351492 DEBUG nova.compute.manager [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.330 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.330 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.331 351492 DEBUG oslo_concurrency.lockutils [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.331 351492 DEBUG nova.compute.manager [req-754d59b3-df68-42e3-8305-ed4d1266388b req-865611ad-2f01-4ef6-bee0-6448641a24f1 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Processing event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  3 02:22:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.495 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.816 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8038168, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.818 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Started (Lifecycle Event)#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.830 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.832 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.842 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.847 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.863 351492 INFO nova.virt.libvirt.driver [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance spawned successfully.#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.864 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.871 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.894 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.895 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8040407, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.896 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Paused (Lifecycle Event)#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.905 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.906 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.907 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.907 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.908 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.909 351492 DEBUG nova.virt.libvirt.driver [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.915 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.922 351492 DEBUG nova.virt.driver [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] Emitting event <LifecycleEvent: 1764728543.8426242, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.922 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Resumed (Lifecycle Event)#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.951 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.959 351492 DEBUG nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.969 351492 INFO nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 10.41 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.970 351492 DEBUG nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 02:22:23 compute-0 nova_compute[351485]: 2025-12-03 02:22:23.982 351492 INFO nova.compute.manager [None req-7003d391-3f37-40a4-8a52-6cda59e1e931 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 02:22:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:22:23 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224194113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.013 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.022 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.036 351492 INFO nova.compute.manager [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 11.55 seconds to build instance.#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.039 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.054 351492 DEBUG oslo_concurrency.lockutils [None req-2e3a6978-67fe-4ba0-a5eb-2fa042e55714 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.066 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:22:24 compute-0 nova_compute[351485]: 2025-12-03 02:22:24.066 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:24 compute-0 podman[458778]: 2025-12-03 02:22:24.844891128 +0000 UTC m=+0.101160048 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]: [
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:    {
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "available": false,
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "ceph_device": false,
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "lsm_data": {},
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "lvs": [],
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "path": "/dev/sr0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "rejected_reasons": [
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "Has a FileSystem",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "Insufficient space (<5GB)"
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        ],
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        "sys_api": {
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "actuators": null,
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "device_nodes": "sr0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "devname": "sr0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "human_readable_size": "482.00 KB",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "id_bus": "ata",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "model": "QEMU DVD-ROM",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "nr_requests": "2",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "parent": "/dev/sr0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "partitions": {},
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "path": "/dev/sr0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "removable": "1",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "rev": "2.5+",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "ro": "0",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "rotational": "1",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "sas_address": "",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "sas_device_handle": "",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "scheduler_mode": "mq-deadline",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "sectors": 0,
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "sectorsize": "2048",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "size": 493568.0,
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "support_discard": "2048",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "type": "disk",
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:            "vendor": "QEMU"
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:        }
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]:    }
Dec  3 02:22:25 compute-0 relaxed_ritchie[456819]: ]
Dec  3 02:22:25 compute-0 systemd[1]: libpod-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Deactivated successfully.
Dec  3 02:22:25 compute-0 systemd[1]: libpod-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Consumed 2.429s CPU time.
Dec  3 02:22:25 compute-0 podman[456779]: 2025-12-03 02:22:25.093127689 +0000 UTC m=+2.749174132 container died eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9c5684d222edcb509f7a9d659d2314a89f2b8fe60579721a24009b4b8dedf3-merged.mount: Deactivated successfully.
Dec  3 02:22:25 compute-0 podman[456779]: 2025-12-03 02:22:25.186473035 +0000 UTC m=+2.842519458 container remove eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:22:25 compute-0 systemd[1]: libpod-conmon-eca1961e9773c0eccafe4031ac799514c763b7307fc4f5045af1259a2e7bee0b.scope: Deactivated successfully.
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99640c76-3d1d-42b7-8a84-9c00bf393bd8 does not exist
Dec  3 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 09986305-7b01-4753-a627-6e7b1b021551 does not exist
Dec  3 02:22:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec  3 02:22:25 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 939dc9d4-9492-48ca-91c5-7dee0d152285 does not exist
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:22:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:22:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.210 351492 DEBUG nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.211 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG oslo_concurrency.lockutils [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 DEBUG nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.212 351492 WARNING nova.compute.manager [req-15047d89-d305-4d38-a56b-5c7c9f4e8465 req-d2436710-2d58-4033-ad74-6995ed78c7d0 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received unexpected event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with vm_state active and task_state None.#033[00m
Dec  3 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:26 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:22:26 compute-0 podman[459619]: 2025-12-03 02:22:26.292055989 +0000 UTC m=+0.055672403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:26 compute-0 nova_compute[351485]: 2025-12-03 02:22:26.439 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:26 compute-0 podman[459619]: 2025-12-03 02:22:26.903161827 +0000 UTC m=+0.666778281 container create e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:22:26 compute-0 systemd[1]: Started libpod-conmon-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope.
Dec  3 02:22:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.052124794 +0000 UTC m=+0.815741288 container init e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.067284872 +0000 UTC m=+0.830901286 container start e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:22:27 compute-0 charming_mirzakhani[459635]: 167 167
Dec  3 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.074321771 +0000 UTC m=+0.837938265 container attach e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:22:27 compute-0 systemd[1]: libpod-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope: Deactivated successfully.
Dec  3 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.078318294 +0000 UTC m=+0.841934778 container died e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  3 02:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd10248e8c6d91c4866ce4de440da3741003989a81f156c7f5cb2876b15602e5-merged.mount: Deactivated successfully.
Dec  3 02:22:27 compute-0 podman[459619]: 2025-12-03 02:22:27.138693529 +0000 UTC m=+0.902309933 container remove e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:22:27 compute-0 systemd[1]: libpod-conmon-e20da72c1463117dd6f44037e0ebdd80316b87eb3d3cc97efa02c8e8f1ee5e5f.scope: Deactivated successfully.
Dec  3 02:22:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 1.7 MiB/s wr, 40 op/s
Dec  3 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.415755684 +0000 UTC m=+0.073084995 container create b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.382641718 +0000 UTC m=+0.039971059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:27 compute-0 systemd[1]: Started libpod-conmon-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope.
Dec  3 02:22:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.539459817 +0000 UTC m=+0.196789158 container init b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.548293367 +0000 UTC m=+0.205622658 container start b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:22:27 compute-0 podman[459657]: 2025-12-03 02:22:27.556627702 +0000 UTC m=+0.213957073 container attach b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.781 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.782 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.788 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:22:27 compute-0 nova_compute[351485]: 2025-12-03 02:22:27.789 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:22:28 compute-0 podman[459680]: 2025-12-03 02:22:28.36599839 +0000 UTC m=+0.128166450 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_id=edpm, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350)
Dec  3 02:22:28 compute-0 podman[459681]: 2025-12-03 02:22:28.374512441 +0000 UTC m=+0.121674537 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:22:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:28 compute-0 podman[459688]: 2025-12-03 02:22:28.399873817 +0000 UTC m=+0.143197105 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:22:28 compute-0 podman[459683]: 2025-12-03 02:22:28.406411442 +0000 UTC m=+0.160042711 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, distribution-scope=public)
Dec  3 02:22:28 compute-0 podman[459679]: 2025-12-03 02:22:28.40884363 +0000 UTC m=+0.162155620 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:22:28
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control']
Dec  3 02:22:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:22:28 compute-0 determined_hellman[459673]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:22:28 compute-0 determined_hellman[459673]: --> relative data size: 1.0
Dec  3 02:22:28 compute-0 determined_hellman[459673]: --> All data devices are unavailable
Dec  3 02:22:28 compute-0 systemd[1]: libpod-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope: Deactivated successfully.
Dec  3 02:22:28 compute-0 podman[459657]: 2025-12-03 02:22:28.668829393 +0000 UTC m=+1.326158684 container died b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bcf6ba284673f9b1b878bb1476e3b6ee0b41896be67c48e4d19b1a96a6dd6d0-merged.mount: Deactivated successfully.
Dec  3 02:22:28 compute-0 podman[459657]: 2025-12-03 02:22:28.752919098 +0000 UTC m=+1.410248389 container remove b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:22:28 compute-0 systemd[1]: libpod-conmon-b5cd6aec82e1d84a3fab141801b0a092ad7d20a058e2b83815e05ac653284845.scope: Deactivated successfully.
Dec  3 02:22:28 compute-0 nova_compute[351485]: 2025-12-03 02:22:28.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:22:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 867 KiB/s wr, 54 op/s
Dec  3 02:22:29 compute-0 podman[158098]: time="2025-12-03T02:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:22:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.833 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.861 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.862 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.862 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.863 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:22:29 compute-0 nova_compute[351485]: 2025-12-03 02:22:29.879 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:22:29 compute-0 podman[459953]: 2025-12-03 02:22:29.964186086 +0000 UTC m=+0.081208154 container create aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:29.929733013 +0000 UTC m=+0.046755151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:30 compute-0 systemd[1]: Started libpod-conmon-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope.
Dec  3 02:22:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.116992331 +0000 UTC m=+0.234014439 container init aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.133084295 +0000 UTC m=+0.250106363 container start aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.139397664 +0000 UTC m=+0.256419742 container attach aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:22:30 compute-0 cranky_mccarthy[459969]: 167 167
Dec  3 02:22:30 compute-0 systemd[1]: libpod-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope: Deactivated successfully.
Dec  3 02:22:30 compute-0 conmon[459969]: conmon aae43f3dc4d0f4d869b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope/container/memory.events
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.14918015 +0000 UTC m=+0.266202188 container died aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd26cf534e94ec7ed332ba72c0867304a3df0459b536edf29f498185aec5e81-merged.mount: Deactivated successfully.
Dec  3 02:22:30 compute-0 podman[459953]: 2025-12-03 02:22:30.200652714 +0000 UTC m=+0.317674742 container remove aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:22:30 compute-0 systemd[1]: libpod-conmon-aae43f3dc4d0f4d869b28cfbb1a4df2ea9c0ea6e701c8095157e42c3c6b460ce.scope: Deactivated successfully.
Dec  3 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.463987491 +0000 UTC m=+0.074506006 container create eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.433969323 +0000 UTC m=+0.044487848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:30 compute-0 systemd[1]: Started libpod-conmon-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope.
Dec  3 02:22:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.639910249 +0000 UTC m=+0.250428764 container init eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.660043918 +0000 UTC m=+0.270562453 container start eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:22:30 compute-0 podman[459993]: 2025-12-03 02:22:30.667692864 +0000 UTC m=+0.278211369 container attach eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:22:30 compute-0 nova_compute[351485]: 2025-12-03 02:22:30.873 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:30 compute-0 nova_compute[351485]: 2025-12-03 02:22:30.875 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Dec  3 02:22:31 compute-0 gracious_raman[460009]: {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    "0": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "devices": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "/dev/loop3"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            ],
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_name": "ceph_lv0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_size": "21470642176",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "name": "ceph_lv0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "tags": {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_name": "ceph",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.crush_device_class": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.encrypted": "0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_id": "0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.vdo": "0"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            },
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "vg_name": "ceph_vg0"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        }
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    ],
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    "1": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "devices": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "/dev/loop4"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            ],
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_name": "ceph_lv1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_size": "21470642176",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "name": "ceph_lv1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "tags": {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_name": "ceph",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.crush_device_class": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.encrypted": "0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_id": "1",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.vdo": "0"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            },
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "vg_name": "ceph_vg1"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        }
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    ],
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    "2": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "devices": [
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "/dev/loop5"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            ],
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_name": "ceph_lv2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_size": "21470642176",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "name": "ceph_lv2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "tags": {
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.cluster_name": "ceph",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.crush_device_class": "",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.encrypted": "0",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osd_id": "2",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:                "ceph.vdo": "0"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            },
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "type": "block",
Dec  3 02:22:31 compute-0 gracious_raman[460009]:            "vg_name": "ceph_vg2"
Dec  3 02:22:31 compute-0 gracious_raman[460009]:        }
Dec  3 02:22:31 compute-0 gracious_raman[460009]:    ]
Dec  3 02:22:31 compute-0 gracious_raman[460009]: }
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:22:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:22:31 compute-0 nova_compute[351485]: 2025-12-03 02:22:31.441 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:31 compute-0 systemd[1]: libpod-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope: Deactivated successfully.
Dec  3 02:22:31 compute-0 podman[459993]: 2025-12-03 02:22:31.455805582 +0000 UTC m=+1.066324067 container died eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 02:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ce9c0a61696efa7d4491b6a6868dd7d3aaeadf47a9f47aea28f471b2962a435-merged.mount: Deactivated successfully.
Dec  3 02:22:31 compute-0 podman[459993]: 2025-12-03 02:22:31.524648946 +0000 UTC m=+1.135167431 container remove eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:22:31 compute-0 systemd[1]: libpod-conmon-eb31fd14505ab82af39b34b85330e226184023c22c0a0cd4790e2592fdf1e677.scope: Deactivated successfully.
Dec  3 02:22:32 compute-0 nova_compute[351485]: 2025-12-03 02:22:32.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.666064932 +0000 UTC m=+0.083651244 container create e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:22:32 compute-0 systemd[1]: Started libpod-conmon-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope.
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.638150223 +0000 UTC m=+0.055736515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:32 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.80940119 +0000 UTC m=+0.226987512 container init e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.834045376 +0000 UTC m=+0.251631678 container start e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:22:32 compute-0 dreamy_wilbur[460179]: 167 167
Dec  3 02:22:32 compute-0 systemd[1]: libpod-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope: Deactivated successfully.
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.842800283 +0000 UTC m=+0.260386565 container attach e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.843426761 +0000 UTC m=+0.261013043 container died e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-69d74fce5f5f8ac84c1888ef089f27766c244bc02548f7c7475c88007fd463e0-merged.mount: Deactivated successfully.
Dec  3 02:22:32 compute-0 podman[460163]: 2025-12-03 02:22:32.906716158 +0000 UTC m=+0.324302450 container remove e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:22:32 compute-0 systemd[1]: libpod-conmon-e3b74825a8ab3df28d74534c2c0a87b8f2b3fb6c113c3c3a5be8a9cc81db5d27.scope: Deactivated successfully.
Dec  3 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.166383642 +0000 UTC m=+0.088001467 container create b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.139034059 +0000 UTC m=+0.060651924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:22:33 compute-0 systemd[1]: Started libpod-conmon-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope.
Dec  3 02:22:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  3 02:22:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.314035942 +0000 UTC m=+0.235653787 container init b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.338557304 +0000 UTC m=+0.260175129 container start b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:22:33 compute-0 podman[460202]: 2025-12-03 02:22:33.342954848 +0000 UTC m=+0.264572703 container attach b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  3 02:22:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:33 compute-0 nova_compute[351485]: 2025-12-03 02:22:33.843 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]: {
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_id": 2,
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "type": "bluestore"
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    },
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_id": 1,
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "type": "bluestore"
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    },
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_id": 0,
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:        "type": "bluestore"
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]:    }
Dec  3 02:22:34 compute-0 wizardly_tesla[460218]: }
Dec  3 02:22:34 compute-0 systemd[1]: libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Deactivated successfully.
Dec  3 02:22:34 compute-0 systemd[1]: libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Consumed 1.177s CPU time.
Dec  3 02:22:34 compute-0 conmon[460218]: conmon b206e31a98ecee480b4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope/container/memory.events
Dec  3 02:22:34 compute-0 podman[460202]: 2025-12-03 02:22:34.532177463 +0000 UTC m=+1.453795328 container died b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 02:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7b35f913b3c9c08e4bcbd4ead20275f6ba940ccf1f1735535a2c2a9124a2897-merged.mount: Deactivated successfully.
Dec  3 02:22:34 compute-0 podman[460202]: 2025-12-03 02:22:34.667171666 +0000 UTC m=+1.588789511 container remove b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:22:34 compute-0 systemd[1]: libpod-conmon-b206e31a98ecee480b4d0f0063c464043b57fcc81a944dc02a78ca831ed3d57f.scope: Deactivated successfully.
Dec  3 02:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:22:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 203c3763-a4e5-4937-bf19-b3ab19ef4ce0 does not exist
Dec  3 02:22:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b6c4a24e-3099-4ced-ba9f-2738a6b26ead does not exist
Dec  3 02:22:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  3 02:22:35 compute-0 nova_compute[351485]: 2025-12-03 02:22:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:35 compute-0 nova_compute[351485]: 2025-12-03 02:22:35.579 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:35 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.443 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.587 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:36 compute-0 nova_compute[351485]: 2025-12-03 02:22:36.587 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:22:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec  3 02:22:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:38 compute-0 nova_compute[351485]: 2025-12-03 02:22:38.842 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011065419067851794 of space, bias 1.0, pg target 0.33196257203555385 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:22:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:22:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec  3 02:22:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 677 KiB/s rd, 22 op/s
Dec  3 02:22:41 compute-0 nova_compute[351485]: 2025-12-03 02:22:41.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:43 compute-0 nova_compute[351485]: 2025-12-03 02:22:43.845 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:46 compute-0 nova_compute[351485]: 2025-12-03 02:22:46.451 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:22:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:22:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516732987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:22:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:48 compute-0 nova_compute[351485]: 2025-12-03 02:22:48.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:48 compute-0 podman[460316]: 2025-12-03 02:22:48.87823455 +0000 UTC m=+0.114416022 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:22:48 compute-0 podman[460314]: 2025-12-03 02:22:48.882994414 +0000 UTC m=+0.123867529 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:22:48 compute-0 podman[460315]: 2025-12-03 02:22:48.918862637 +0000 UTC m=+0.156044148 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  3 02:22:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:51 compute-0 nova_compute[351485]: 2025-12-03 02:22:51.454 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:52 compute-0 ovn_controller[89134]: 2025-12-03T02:22:52Z|00195|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Dec  3 02:22:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:53 compute-0 nova_compute[351485]: 2025-12-03 02:22:53.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:55 compute-0 podman[460373]: 2025-12-03 02:22:55.899150302 +0000 UTC m=+0.142900467 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:22:56 compute-0 nova_compute[351485]: 2025-12-03 02:22:56.458 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:56 compute-0 nova_compute[351485]: 2025-12-03 02:22:56.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:22:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:22:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:22:58 compute-0 nova_compute[351485]: 2025-12-03 02:22:58.852 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:22:58 compute-0 podman[460395]: 2025-12-03 02:22:58.860558147 +0000 UTC m=+0.093505372 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:22:58 compute-0 podman[460393]: 2025-12-03 02:22:58.887435406 +0000 UTC m=+0.138684538 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:22:58 compute-0 podman[460396]: 2025-12-03 02:22:58.889479184 +0000 UTC m=+0.095410976 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, vcs-type=git, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:22:58 compute-0 podman[460407]: 2025-12-03 02:22:58.895048991 +0000 UTC m=+0.130101945 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 02:22:58 compute-0 podman[460394]: 2025-12-03 02:22:58.904329813 +0000 UTC m=+0.158643441 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal 
is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  3 02:22:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.656 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:22:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:22:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:22:59 compute-0 podman[158098]: time="2025-12-03T02:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:22:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:23:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:23:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:23:01 compute-0 ovn_controller[89134]: 2025-12-03T02:23:01Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:0c:ae 10.100.1.46
Dec  3 02:23:01 compute-0 ovn_controller[89134]: 2025-12-03T02:23:01Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:0c:ae 10.100.1.46
Dec  3 02:23:01 compute-0 nova_compute[351485]: 2025-12-03 02:23:01.464 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 221 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Dec  3 02:23:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:03 compute-0 nova_compute[351485]: 2025-12-03 02:23:03.855 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 235 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Dec  3 02:23:06 compute-0 nova_compute[351485]: 2025-12-03 02:23:06.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 02:23:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:08 compute-0 nova_compute[351485]: 2025-12-03 02:23:08.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 02:23:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec  3 02:23:11 compute-0 nova_compute[351485]: 2025-12-03 02:23:11.471 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec  3 02:23:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:13 compute-0 nova_compute[351485]: 2025-12-03 02:23:13.864 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:14 compute-0 nova_compute[351485]: 2025-12-03 02:23:14.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 402 KiB/s wr, 13 op/s
Dec  3 02:23:16 compute-0 nova_compute[351485]: 2025-12-03 02:23:16.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 76 KiB/s wr, 8 op/s
Dec  3 02:23:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:18 compute-0 nova_compute[351485]: 2025-12-03 02:23:18.867 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.512 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.513 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.513 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.522 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  3 02:23:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:19.524 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5774f494984a65ffbde2426a05531a474fe014ea4dcd597248cb0a9b623a789b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  3 02:23:19 compute-0 podman[460495]: 2025-12-03 02:23:19.867077487 +0000 UTC m=+0.095106167 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:23:19 compute-0 podman[460493]: 2025-12-03 02:23:19.875179126 +0000 UTC m=+0.118168688 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  3 02:23:19 compute-0 podman[460494]: 2025-12-03 02:23:19.887394601 +0000 UTC m=+0.121606005 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 02:23:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.4 KiB/s wr, 0 op/s
Dec  3 02:23:21 compute-0 nova_compute[351485]: 2025-12-03 02:23:21.480 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Wed, 03 Dec 2025 02:23:19 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2f4f25fb-399c-406b-9246-7ca842c22f00 x-openstack-request-id: req-2f4f25fb-399c-406b-9246-7ca842c22f00 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4fb8fc07-d7b7-4be8-94da-155b040faf32", "name": "te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q", "status": "ACTIVE", "tenant_id": "63f39ac2863946b8b817457e689ff933", "user_id": "8f61f44789494541b7c101b0fdab52f0", "metadata": {"metering.server_group": "38bfb145-4971-41b6-9bc3-faf3c3931019"}, "hostId": "b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2", "image": {"id": "8876482c-db67-48c0-9203-60685152fc9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8876482c-db67-48c0-9203-60685152fc9d"}]}, "flavor": {"id": "89219634-32e9-4cb5-896f-6fa0b1edfe13", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/89219634-32e9-4cb5-896f-6fa0b1edfe13"}]}, "created": "2025-12-03T02:22:10Z", "updated": "2025-12-03T02:22:24Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.46", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:0c:ae"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-03T02:22:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.831 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4fb8fc07-d7b7-4be8-94da-155b040faf32 used request id req-2f4f25fb-399c-406b-9246-7ca842c22f00 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.833 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.839 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:23:21.841144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.883 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.5703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 43.4296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.919 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:23:21.919304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.924 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4fb8fc07-d7b7-4be8-94da-155b040faf32 / tap94fdb5b9-66 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.924 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.928 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.930 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:23:21.929998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.932 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.933 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.935 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:23:21.932236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:23:21.933870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:23:21.935428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.937 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:23:21.937514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.951 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.952 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.971 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.971 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.972 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.973 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>]
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-03T02:23:21.973283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.974 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:21 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:21.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:23:21.974820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.008 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.008 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.054 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 30342144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.055 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.056 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.057 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:23:22.056799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.059 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2892253301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 193523124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:23:22.058970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.061 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.062 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:23:22.061614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:23:22.063970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.065 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.066 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:23:22.065758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.068 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72790016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.069 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8474740037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:23:22.068667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:23:22.071163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.071 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.072 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 9924409915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.072 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 313 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.074 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:23:22.074096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:23:22.076289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 55040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:23:22.078044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.078 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 243990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:23:22.079907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.081 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.083 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.084 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.086 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.087 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:23:22.081273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:23:22.083215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:23:22.086030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:23:22.087713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.092 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q>]
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:23:22.090892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:22 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:23:22.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-03T02:23:22.092292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:23:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:23:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.608 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.609 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.610 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.611 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:23:23 compute-0 nova_compute[351485]: 2025-12-03 02:23:23.871 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:23:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508921091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.122 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.257 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.258 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.268 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.268 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.746 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.747 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.897377014160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.747 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.748 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.841 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.842 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.843 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.843 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:23:24 compute-0 nova_compute[351485]: 2025-12-03 02:23:24.908 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:23:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:23:25 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:23:25 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895294831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.430 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.445 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.471 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.500 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:23:25 compute-0 nova_compute[351485]: 2025-12-03 02:23:25.501 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:23:26 compute-0 nova_compute[351485]: 2025-12-03 02:23:26.482 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:26 compute-0 podman[460595]: 2025-12-03 02:23:26.86993979 +0000 UTC m=+0.109768151 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:23:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:23:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:23:28
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec  3 02:23:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.504 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.505 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.837 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.838 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.838 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:23:28 compute-0 nova_compute[351485]: 2025-12-03 02:23:28.875 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:23:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:23:29 compute-0 podman[158098]: time="2025-12-03T02:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:23:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:23:29 compute-0 podman[460618]: 2025-12-03 02:23:29.894278692 +0000 UTC m=+0.106802417 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, release=1214.1726694543, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=edpm, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 02:23:29 compute-0 podman[460617]: 2025-12-03 02:23:29.900848408 +0000 UTC m=+0.120618428 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:23:29 compute-0 podman[460616]: 2025-12-03 02:23:29.906503007 +0000 UTC m=+0.114612018 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, vcs-type=git, distribution-scope=public)
Dec  3 02:23:29 compute-0 podman[460615]: 2025-12-03 02:23:29.929312141 +0000 UTC m=+0.172342268 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:23:29 compute-0 podman[460625]: 2025-12-03 02:23:29.931288927 +0000 UTC m=+0.141974160 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:23:29 compute-0 nova_compute[351485]: 2025-12-03 02:23:29.981 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.002 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.003 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.003 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:30 compute-0 nova_compute[351485]: 2025-12-03 02:23:30.004 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:31 compute-0 nova_compute[351485]: 2025-12-03 02:23:31.078 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:23:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:23:31 compute-0 nova_compute[351485]: 2025-12-03 02:23:31.485 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:33 compute-0 nova_compute[351485]: 2025-12-03 02:23:33.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:33 compute-0 nova_compute[351485]: 2025-12-03 02:23:33.878 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:35 compute-0 nova_compute[351485]: 2025-12-03 02:23:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:35 compute-0 nova_compute[351485]: 2025-12-03 02:23:35.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:23:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:23:35 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:36 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:36 compute-0 nova_compute[351485]: 2025-12-03 02:23:36.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5bca3e3-ad1d-4906-8aae-62cb97d2b368 does not exist
Dec  3 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2c6330b8-7ae7-4dbb-8542-446dc2ad2888 does not exist
Dec  3 02:23:37 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89421ddb-8ec5-4cf7-abb3-1443ad240d28 does not exist
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:23:37 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:37 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.109270336 +0000 UTC m=+0.104821961 container create 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.074603717 +0000 UTC m=+0.070155402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:38 compute-0 systemd[1]: Started libpod-conmon-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope.
Dec  3 02:23:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.282340534 +0000 UTC m=+0.277892199 container init 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.294268251 +0000 UTC m=+0.289819846 container start 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.298757288 +0000 UTC m=+0.294308923 container attach 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:23:38 compute-0 mystifying_varahamihira[461117]: 167 167
Dec  3 02:23:38 compute-0 systemd[1]: libpod-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope: Deactivated successfully.
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.307929437 +0000 UTC m=+0.303481102 container died 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  3 02:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed74f22f13aaef893214ec30b0ec597e1044d8dc3477466cd5e27cba8fcb6c1-merged.mount: Deactivated successfully.
Dec  3 02:23:38 compute-0 podman[461101]: 2025-12-03 02:23:38.387173975 +0000 UTC m=+0.382725590 container remove 2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:23:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:38 compute-0 systemd[1]: libpod-conmon-2bd9accad92fd8d4593caa0b73e904f7856839459fef7d28fc0bc4402891b3e9.scope: Deactivated successfully.
Dec  3 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.632643218 +0000 UTC m=+0.083063497 container create 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.598408571 +0000 UTC m=+0.048828860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:38 compute-0 systemd[1]: Started libpod-conmon-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope.
Dec  3 02:23:38 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.803326308 +0000 UTC m=+0.253746567 container init 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.841735403 +0000 UTC m=+0.292155652 container start 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:23:38 compute-0 podman[461140]: 2025-12-03 02:23:38.850461489 +0000 UTC m=+0.300881768 container attach 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:23:38 compute-0 nova_compute[351485]: 2025-12-03 02:23:38.883 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015153665673634173 of space, bias 1.0, pg target 0.45460997020902516 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:23:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:23:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:40 compute-0 vibrant_poitras[461156]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:23:40 compute-0 vibrant_poitras[461156]: --> relative data size: 1.0
Dec  3 02:23:40 compute-0 vibrant_poitras[461156]: --> All data devices are unavailable
Dec  3 02:23:40 compute-0 systemd[1]: libpod-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Deactivated successfully.
Dec  3 02:23:40 compute-0 systemd[1]: libpod-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Consumed 1.231s CPU time.
Dec  3 02:23:40 compute-0 podman[461140]: 2025-12-03 02:23:40.145620797 +0000 UTC m=+1.596041066 container died 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 02:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-28806bee26cfd119ceefcb052431138df89a6ee8762faf231d2a4b641eb65282-merged.mount: Deactivated successfully.
Dec  3 02:23:40 compute-0 podman[461140]: 2025-12-03 02:23:40.23034277 +0000 UTC m=+1.680763019 container remove 3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:23:40 compute-0 systemd[1]: libpod-conmon-3b580c9b2f124e31e3fe19180213162be5f96112947f79b994da2150b1c72804.scope: Deactivated successfully.
Dec  3 02:23:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.334156054 +0000 UTC m=+0.089203491 container create 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.293986409 +0000 UTC m=+0.049033916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:41 compute-0 systemd[1]: Started libpod-conmon-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope.
Dec  3 02:23:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.477521322 +0000 UTC m=+0.232568819 container init 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.493511244 +0000 UTC m=+0.248558701 container start 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:23:41 compute-0 nova_compute[351485]: 2025-12-03 02:23:41.491 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:23:41 compute-0 relaxed_williamson[461352]: 167 167
Dec  3 02:23:41 compute-0 systemd[1]: libpod-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope: Deactivated successfully.
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.501614353 +0000 UTC m=+0.256661880 container attach 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:23:41 compute-0 conmon[461352]: conmon 974a76abc840646a8ce0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope/container/memory.events
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.503070304 +0000 UTC m=+0.258117741 container died 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e45b73091bba86ead5a9b2acde63f1d6936662990cb43901836afa4f0b26b6-merged.mount: Deactivated successfully.
Dec  3 02:23:41 compute-0 podman[461336]: 2025-12-03 02:23:41.559144938 +0000 UTC m=+0.314192355 container remove 974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:23:41 compute-0 systemd[1]: libpod-conmon-974a76abc840646a8ce084e4d54c7c444e43a9e9a24b3fdda217b9343045ce88.scope: Deactivated successfully.
Dec  3 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.801632005 +0000 UTC m=+0.068788494 container create 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.775154877 +0000 UTC m=+0.042311376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:41 compute-0 systemd[1]: Started libpod-conmon-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope.
Dec  3 02:23:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.947180085 +0000 UTC m=+0.214336664 container init 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.966188692 +0000 UTC m=+0.233345181 container start 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:23:41 compute-0 podman[461378]: 2025-12-03 02:23:41.971747209 +0000 UTC m=+0.238903718 container attach 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]: {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    "0": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "devices": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "/dev/loop3"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            ],
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_name": "ceph_lv0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_size": "21470642176",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "name": "ceph_lv0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "tags": {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_name": "ceph",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.crush_device_class": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.encrypted": "0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_id": "0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.vdo": "0"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            },
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "vg_name": "ceph_vg0"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        }
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    ],
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    "1": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "devices": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "/dev/loop4"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            ],
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_name": "ceph_lv1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_size": "21470642176",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "name": "ceph_lv1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "tags": {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_name": "ceph",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.crush_device_class": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.encrypted": "0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_id": "1",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.vdo": "0"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            },
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "vg_name": "ceph_vg1"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        }
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    ],
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    "2": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "devices": [
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "/dev/loop5"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            ],
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_name": "ceph_lv2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_size": "21470642176",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "name": "ceph_lv2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "tags": {
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.cluster_name": "ceph",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.crush_device_class": "",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.encrypted": "0",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osd_id": "2",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:                "ceph.vdo": "0"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            },
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "type": "block",
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:            "vg_name": "ceph_vg2"
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:        }
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]:    ]
Dec  3 02:23:42 compute-0 boring_mcclintock[461394]: }
Dec  3 02:23:42 compute-0 systemd[1]: libpod-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope: Deactivated successfully.
Dec  3 02:23:42 compute-0 podman[461403]: 2025-12-03 02:23:42.93631047 +0000 UTC m=+0.090348342 container died 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-133f55651ec14128b0359a21a370b89bdf3f5d3d2355a8db9f2e44476f2b9756-merged.mount: Deactivated successfully.
Dec  3 02:23:43 compute-0 podman[461403]: 2025-12-03 02:23:43.049373434 +0000 UTC m=+0.203411236 container remove 356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:23:43 compute-0 systemd[1]: libpod-conmon-356365b9a426596db9a32654cff4d06d62fa32409ab1bb1ef8deaad545315f2c.scope: Deactivated successfully.
Dec  3 02:23:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:43 compute-0 nova_compute[351485]: 2025-12-03 02:23:43.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.259967803 +0000 UTC m=+0.068650150 container create 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:23:44 compute-0 systemd[1]: Started libpod-conmon-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope.
Dec  3 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.231673924 +0000 UTC m=+0.040356261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.378300305 +0000 UTC m=+0.186982692 container init 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.388102852 +0000 UTC m=+0.196785159 container start 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 02:23:44 compute-0 podman[461556]: 2025-12-03 02:23:44.39300976 +0000 UTC m=+0.201692097 container attach 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:23:44 compute-0 practical_nash[461572]: 167 167
Dec  3 02:23:44 compute-0 systemd[1]: libpod-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope: Deactivated successfully.
Dec  3 02:23:44 compute-0 conmon[461572]: conmon 9f9e4d33458530b67473 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope/container/memory.events
Dec  3 02:23:44 compute-0 podman[461577]: 2025-12-03 02:23:44.458921542 +0000 UTC m=+0.041237266 container died 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f25bab1a26f2b77a3b36b0e090dcf1e5a35fb8032f976dc93a1023b98113be68-merged.mount: Deactivated successfully.
Dec  3 02:23:44 compute-0 podman[461577]: 2025-12-03 02:23:44.511727683 +0000 UTC m=+0.094043397 container remove 9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:23:44 compute-0 systemd[1]: libpod-conmon-9f9e4d33458530b67473c570cbc30d138bdbbb85057fa071ff235d05f6fe5fb3.scope: Deactivated successfully.
Dec  3 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.77611786 +0000 UTC m=+0.069044691 container create 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.747452231 +0000 UTC m=+0.040379142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:23:44 compute-0 systemd[1]: Started libpod-conmon-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope.
Dec  3 02:23:44 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.942972393 +0000 UTC m=+0.235899234 container init 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.953412407 +0000 UTC m=+0.246339258 container start 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:23:44 compute-0 podman[461596]: 2025-12-03 02:23:44.960123327 +0000 UTC m=+0.253050148 container attach 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 02:23:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:23:46 compute-0 angry_lehmann[461612]: {
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_id": 2,
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "type": "bluestore"
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    },
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_id": 1,
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "type": "bluestore"
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    },
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_id": 0,
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:        "type": "bluestore"
Dec  3 02:23:46 compute-0 angry_lehmann[461612]:    }
Dec  3 02:23:46 compute-0 angry_lehmann[461612]: }
Dec  3 02:23:46 compute-0 systemd[1]: libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Deactivated successfully.
Dec  3 02:23:46 compute-0 systemd[1]: libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Consumed 1.237s CPU time.
Dec  3 02:23:46 compute-0 conmon[461612]: conmon 0141180c1ffaa823ec58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope/container/memory.events
Dec  3 02:23:46 compute-0 podman[461596]: 2025-12-03 02:23:46.194024414 +0000 UTC m=+1.486951265 container died 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc849c55fa1616c71a69da3629075fe8c0fe0a14a87c315095266a921dd263e0-merged.mount: Deactivated successfully.
Dec  3 02:23:46 compute-0 podman[461596]: 2025-12-03 02:23:46.305746649 +0000 UTC m=+1.598673470 container remove 0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  3 02:23:46 compute-0 systemd[1]: libpod-conmon-0141180c1ffaa823ec586c17ee16b13cd68c6068d8ea18b2e301ca7dcf215b1d.scope: Deactivated successfully.
Dec  3 02:23:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:23:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:46 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:23:46 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 269afbdb-67af-4a75-bed8-d3307674805b does not exist
Dec  3 02:23:46 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0a4b85f4-52fd-43d8-b0af-30d3542a26b1 does not exist
Dec  3 02:23:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:46 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:23:46 compute-0 nova_compute[351485]: 2025-12-03 02:23:46.495 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:23:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:23:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:23:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7143871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:23:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.326 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.363 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.364 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Triggering sync for uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.366 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.367 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.368 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.368 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:23:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.421 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.426 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:23:48 compute-0 nova_compute[351485]: 2025-12-03 02:23:48.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:50 compute-0 podman[461710]: 2025-12-03 02:23:50.872273166 +0000 UTC m=+0.109807603 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  3 02:23:50 compute-0 podman[461712]: 2025-12-03 02:23:50.874286462 +0000 UTC m=+0.114894795 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:23:50 compute-0 podman[461711]: 2025-12-03 02:23:50.909757544 +0000 UTC m=+0.152085556 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  3 02:23:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:51 compute-0 nova_compute[351485]: 2025-12-03 02:23:51.498 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:53 compute-0 nova_compute[351485]: 2025-12-03 02:23:53.890 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:56 compute-0 nova_compute[351485]: 2025-12-03 02:23:56.502 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:57 compute-0 podman[461773]: 2025-12-03 02:23:57.897444958 +0000 UTC m=+0.139421458 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, 
tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:23:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.421185) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638421246, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1312, "num_deletes": 256, "total_data_size": 2041298, "memory_usage": 2075120, "flush_reason": "Manual Compaction"}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638436276, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1988829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41705, "largest_seqno": 43016, "table_properties": {"data_size": 1982625, "index_size": 3471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12826, "raw_average_key_size": 19, "raw_value_size": 1970174, "raw_average_value_size": 2989, "num_data_blocks": 156, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728508, "oldest_key_time": 1764728508, "file_creation_time": 1764728638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 15163 microseconds, and 5418 cpu microseconds.
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.436348) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1988829 bytes OK
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.436379) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446066) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446095) EVENT_LOG_v1 {"time_micros": 1764728638446086, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.446119) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2035386, prev total WAL file size 2035386, number of live WAL files 2.
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.447482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1942KB)], [98(7785KB)]
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638447635, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9961682, "oldest_snapshot_seqno": -1}
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:23:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5912 keys, 9857083 bytes, temperature: kUnknown
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638544096, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9857083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9816802, "index_size": 24427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 153450, "raw_average_key_size": 25, "raw_value_size": 9709120, "raw_average_value_size": 1642, "num_data_blocks": 980, "num_entries": 5912, "num_filter_entries": 5912, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.544331) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9857083 bytes
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.545848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.2 rd, 102.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 6436, records dropped: 524 output_compression: NoCompression
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.545867) EVENT_LOG_v1 {"time_micros": 1764728638545858, "job": 58, "event": "compaction_finished", "compaction_time_micros": 96521, "compaction_time_cpu_micros": 43521, "output_level": 6, "num_output_files": 1, "total_output_size": 9857083, "num_input_records": 6436, "num_output_records": 5912, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638546639, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728638549628, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.447285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:23:58.550059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:23:58 compute-0 nova_compute[351485]: 2025-12-03 02:23:58.895 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:23:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.658 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:23:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:23:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:23:59 compute-0 podman[158098]: time="2025-12-03T02:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:23:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
Dec  3 02:24:00 compute-0 podman[461796]: 2025-12-03 02:24:00.871982404 +0000 UTC m=+0.104939815 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9)
Dec  3 02:24:00 compute-0 podman[461808]: 2025-12-03 02:24:00.889579781 +0000 UTC m=+0.112900500 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  3 02:24:00 compute-0 podman[461795]: 2025-12-03 02:24:00.889883999 +0000 UTC m=+0.137729210 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:24:00 compute-0 podman[461793]: 2025-12-03 02:24:00.906036466 +0000 UTC m=+0.154997029 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:24:00 compute-0 podman[461794]: 2025-12-03 02:24:00.918154078 +0000 UTC m=+0.160156154 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc.)
Dec  3 02:24:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:24:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:24:01 compute-0 nova_compute[351485]: 2025-12-03 02:24:01.505 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:03 compute-0 nova_compute[351485]: 2025-12-03 02:24:03.900 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:06 compute-0 nova_compute[351485]: 2025-12-03 02:24:06.507 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:08 compute-0 nova_compute[351485]: 2025-12-03 02:24:08.904 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:11 compute-0 nova_compute[351485]: 2025-12-03 02:24:11.512 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:13 compute-0 nova_compute[351485]: 2025-12-03 02:24:13.908 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:16 compute-0 nova_compute[351485]: 2025-12-03 02:24:16.515 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:16 compute-0 nova_compute[351485]: 2025-12-03 02:24:16.619 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  3 02:24:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:18 compute-0 nova_compute[351485]: 2025-12-03 02:24:18.914 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:21 compute-0 nova_compute[351485]: 2025-12-03 02:24:21.520 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:21 compute-0 podman[461892]: 2025-12-03 02:24:21.851703226 +0000 UTC m=+0.097688320 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 02:24:21 compute-0 podman[461894]: 2025-12-03 02:24:21.871143215 +0000 UTC m=+0.096113375 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:24:21 compute-0 podman[461893]: 2025-12-03 02:24:21.888891546 +0000 UTC m=+0.122649894 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:24:22 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 02:24:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:23 compute-0 nova_compute[351485]: 2025-12-03 02:24:23.919 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.626 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.627 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.627 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:24:25 compute-0 nova_compute[351485]: 2025-12-03 02:24:25.628 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:24:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 02:24:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:24:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878797341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.157 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.275 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.281 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.282 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.524 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.639 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.897377014160156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.641 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.773 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.773 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:24:26 compute-0 nova_compute[351485]: 2025-12-03 02:24:26.849 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:24:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:24:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880468333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.409 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.422 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.449 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.455 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:24:27 compute-0 nova_compute[351485]: 2025-12-03 02:24:27.457 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:24:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.460 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.460 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.461 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:24:28
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Dec  3 02:24:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.838 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.839 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.840 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.841 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:24:28 compute-0 podman[461995]: 2025-12-03 02:24:28.892079348 +0000 UTC m=+0.139856871 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:24:28 compute-0 nova_compute[351485]: 2025-12-03 02:24:28.921 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:24:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:29 compute-0 podman[158098]: time="2025-12-03T02:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:24:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
Dec  3 02:24:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:24:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.526 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:31 compute-0 podman[462015]: 2025-12-03 02:24:31.852116134 +0000 UTC m=+0.107940049 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.857 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.873 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.874 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:31 compute-0 nova_compute[351485]: 2025-12-03 02:24:31.875 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:31 compute-0 podman[462014]: 2025-12-03 02:24:31.888577804 +0000 UTC m=+0.141604180 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 02:24:31 compute-0 podman[462026]: 2025-12-03 02:24:31.894390938 +0000 UTC m=+0.116115410 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Dec  3 02:24:31 compute-0 podman[462016]: 2025-12-03 02:24:31.894420589 +0000 UTC m=+0.143020130 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:24:31 compute-0 podman[462033]: 2025-12-03 02:24:31.899802431 +0000 UTC m=+0.133107700 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:24:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.925 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.985 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:33 compute-0 nova_compute[351485]: 2025-12-03 02:24:33.986 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:35 compute-0 nova_compute[351485]: 2025-12-03 02:24:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:36 compute-0 nova_compute[351485]: 2025-12-03 02:24:36.530 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:37 compute-0 nova_compute[351485]: 2025-12-03 02:24:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:24:37 compute-0 nova_compute[351485]: 2025-12-03 02:24:37.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:24:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:24:38 compute-0 nova_compute[351485]: 2025-12-03 02:24:38.929 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015153665673634173 of space, bias 1.0, pg target 0.45460997020902516 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:24:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:24:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:41 compute-0 nova_compute[351485]: 2025-12-03 02:24:41.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:43 compute-0 nova_compute[351485]: 2025-12-03 02:24:43.931 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:46 compute-0 nova_compute[351485]: 2025-12-03 02:24:46.536 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533367272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0fdfa9fa-a666-4191-b4a3-eea286efd634 does not exist
Dec  3 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 05951b5b-d812-4374-a8f6-6afa5ade34a0 does not exist
Dec  3 02:24:47 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 6a1e8666-8abe-4421-b46d-28a3e31dab9c does not exist
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:24:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:24:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:24:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  3 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:48 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.727805243 +0000 UTC m=+0.061003784 container create ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:24:48 compute-0 systemd[1]: Started libpod-conmon-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope.
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.707586162 +0000 UTC m=+0.040784723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.833395375 +0000 UTC m=+0.166593926 container init ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.844412026 +0000 UTC m=+0.177610567 container start ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.84952228 +0000 UTC m=+0.182720841 container attach ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 02:24:48 compute-0 agitated_yalow[462398]: 167 167
Dec  3 02:24:48 compute-0 systemd[1]: libpod-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope: Deactivated successfully.
Dec  3 02:24:48 compute-0 conmon[462398]: conmon ad2c78ff935dfcd8035b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope/container/memory.events
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.854356247 +0000 UTC m=+0.187554808 container died ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a79ce3013357a07d263f7b41b0e4d45671d4a61b9a552ec9c3d845a35dd6d5-merged.mount: Deactivated successfully.
Dec  3 02:24:48 compute-0 podman[462381]: 2025-12-03 02:24:48.926749981 +0000 UTC m=+0.259948522 container remove ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:24:48 compute-0 nova_compute[351485]: 2025-12-03 02:24:48.932 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:48 compute-0 systemd[1]: libpod-conmon-ad2c78ff935dfcd8035b0ee03e697a19dafe7aef42edc898f0bd343d63ebea83.scope: Deactivated successfully.
Dec  3 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.188909985 +0000 UTC m=+0.095914050 container create 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.158719142 +0000 UTC m=+0.065723197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:49 compute-0 systemd[1]: Started libpod-conmon-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope.
Dec  3 02:24:49 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.381618528 +0000 UTC m=+0.288622573 container init 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.393105792 +0000 UTC m=+0.300109857 container start 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:24:49 compute-0 podman[462421]: 2025-12-03 02:24:49.398842674 +0000 UTC m=+0.305846709 container attach 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:24:50 compute-0 nice_meitner[462436]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:24:50 compute-0 nice_meitner[462436]: --> relative data size: 1.0
Dec  3 02:24:50 compute-0 nice_meitner[462436]: --> All data devices are unavailable
Dec  3 02:24:50 compute-0 systemd[1]: libpod-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Deactivated successfully.
Dec  3 02:24:50 compute-0 systemd[1]: libpod-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Consumed 1.201s CPU time.
Dec  3 02:24:50 compute-0 podman[462468]: 2025-12-03 02:24:50.723767292 +0000 UTC m=+0.053768130 container died 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaec2ef9262ea94ca73421ebc11b6ba9ef1b0ccf49e0f5d72f6559569a72f748-merged.mount: Deactivated successfully.
Dec  3 02:24:50 compute-0 podman[462468]: 2025-12-03 02:24:50.818750264 +0000 UTC m=+0.148751032 container remove 71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:24:50 compute-0 systemd[1]: libpod-conmon-71f1f15b3035222bfc0187b1183c2b6bbef184eb76ba614dd2ccd2710cb3be3f.scope: Deactivated successfully.
Dec  3 02:24:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:51 compute-0 nova_compute[351485]: 2025-12-03 02:24:51.539 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.094346729 +0000 UTC m=+0.102967309 container create 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.056674435 +0000 UTC m=+0.065295015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:52 compute-0 systemd[1]: Started libpod-conmon-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope.
Dec  3 02:24:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.217191819 +0000 UTC m=+0.225812429 container init 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.229516627 +0000 UTC m=+0.238137187 container start 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.234035834 +0000 UTC m=+0.242656434 container attach 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:24:52 compute-0 systemd[1]: libpod-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope: Deactivated successfully.
Dec  3 02:24:52 compute-0 interesting_kilby[462659]: 167 167
Dec  3 02:24:52 compute-0 conmon[462659]: conmon 1100632859ff574cfa3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope/container/memory.events
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.237120422 +0000 UTC m=+0.245741002 container died 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:24:52 compute-0 podman[462636]: 2025-12-03 02:24:52.249502971 +0000 UTC m=+0.099516161 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7635df6fc40b925ea45303576f590a96f6f2de00d52a90b9ab1d61a7de68c25-merged.mount: Deactivated successfully.
Dec  3 02:24:52 compute-0 podman[462640]: 2025-12-03 02:24:52.268359714 +0000 UTC m=+0.107328422 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:24:52 compute-0 podman[462637]: 2025-12-03 02:24:52.279611562 +0000 UTC m=+0.118614671 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:24:52 compute-0 podman[462622]: 2025-12-03 02:24:52.284966683 +0000 UTC m=+0.293587243 container remove 1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 02:24:52 compute-0 systemd[1]: libpod-conmon-1100632859ff574cfa3df182cb262dbe1792f6fb0dc4fe58c738dfd54ac42119.scope: Deactivated successfully.
Dec  3 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.546156359 +0000 UTC m=+0.090466096 container create 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.510109011 +0000 UTC m=+0.054418798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:52 compute-0 systemd[1]: Started libpod-conmon-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope.
Dec  3 02:24:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.746120247 +0000 UTC m=+0.290429994 container init 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.770860405 +0000 UTC m=+0.315170152 container start 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:24:52 compute-0 podman[462722]: 2025-12-03 02:24:52.777812472 +0000 UTC m=+0.322122219 container attach 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:24:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:24:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:53 compute-0 dazzling_bose[462738]: {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    "0": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "devices": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "/dev/loop3"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            ],
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_name": "ceph_lv0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_size": "21470642176",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "name": "ceph_lv0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "tags": {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_name": "ceph",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.crush_device_class": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.encrypted": "0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_id": "0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.vdo": "0"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            },
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "vg_name": "ceph_vg0"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        }
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    ],
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    "1": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "devices": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "/dev/loop4"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            ],
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_name": "ceph_lv1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_size": "21470642176",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "name": "ceph_lv1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "tags": {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_name": "ceph",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.crush_device_class": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.encrypted": "0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_id": "1",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.vdo": "0"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            },
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "vg_name": "ceph_vg1"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        }
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    ],
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    "2": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "devices": [
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "/dev/loop5"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            ],
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_name": "ceph_lv2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_size": "21470642176",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "name": "ceph_lv2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "tags": {
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.cluster_name": "ceph",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.crush_device_class": "",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.encrypted": "0",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osd_id": "2",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:                "ceph.vdo": "0"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            },
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "type": "block",
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:            "vg_name": "ceph_vg2"
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:        }
Dec  3 02:24:53 compute-0 dazzling_bose[462738]:    ]
Dec  3 02:24:53 compute-0 dazzling_bose[462738]: }
Dec  3 02:24:53 compute-0 systemd[1]: libpod-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope: Deactivated successfully.
Dec  3 02:24:53 compute-0 podman[462722]: 2025-12-03 02:24:53.617288219 +0000 UTC m=+1.161597936 container died 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f491239df7fdaa6377193002c804a2a6ff9b8611c4708ba6cb63ded6dd77bc3-merged.mount: Deactivated successfully.
Dec  3 02:24:53 compute-0 podman[462722]: 2025-12-03 02:24:53.702781174 +0000 UTC m=+1.247090881 container remove 4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:24:53 compute-0 systemd[1]: libpod-conmon-4e2a3d93c934e59fb2649b0038cc065cef5553e15bca93391b70dc0a89cabd91.scope: Deactivated successfully.
Dec  3 02:24:53 compute-0 nova_compute[351485]: 2025-12-03 02:24:53.936 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.807161453 +0000 UTC m=+0.093543272 container create ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.77374885 +0000 UTC m=+0.060130679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:54 compute-0 systemd[1]: Started libpod-conmon-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope.
Dec  3 02:24:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.966929026 +0000 UTC m=+0.253310905 container init ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.984008138 +0000 UTC m=+0.270389957 container start ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:24:54 compute-0 sad_shirley[462909]: 167 167
Dec  3 02:24:54 compute-0 systemd[1]: libpod-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope: Deactivated successfully.
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.997790477 +0000 UTC m=+0.284172306 container attach ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:24:54 compute-0 podman[462894]: 2025-12-03 02:24:54.998885968 +0000 UTC m=+0.285267767 container died ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db8adaa06daa907f89749a1a94a3807f1d7624c7cf2cbb499f4d05e15abe972-merged.mount: Deactivated successfully.
Dec  3 02:24:55 compute-0 podman[462894]: 2025-12-03 02:24:55.082220902 +0000 UTC m=+0.368602691 container remove ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shirley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:24:55 compute-0 systemd[1]: libpod-conmon-ccc3734460b17e9f5bc5d444ced6f4c39a972b07ea8cfb41bcf1a1544357169c.scope: Deactivated successfully.
Dec  3 02:24:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 170 B/s wr, 3 op/s
Dec  3 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.406408797 +0000 UTC m=+0.081102031 container create ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.376623106 +0000 UTC m=+0.051316370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:24:55 compute-0 systemd[1]: Started libpod-conmon-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope.
Dec  3 02:24:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.584683152 +0000 UTC m=+0.259376466 container init ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.613930558 +0000 UTC m=+0.288623812 container start ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:24:55 compute-0 podman[462933]: 2025-12-03 02:24:55.619401963 +0000 UTC m=+0.294095227 container attach ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:24:56 compute-0 nova_compute[351485]: 2025-12-03 02:24:56.545 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]: {
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_id": 2,
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "type": "bluestore"
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    },
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_id": 1,
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "type": "bluestore"
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    },
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_id": 0,
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:        "type": "bluestore"
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]:    }
Dec  3 02:24:56 compute-0 agitated_goldberg[462947]: }
Dec  3 02:24:56 compute-0 systemd[1]: libpod-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Deactivated successfully.
Dec  3 02:24:56 compute-0 podman[462933]: 2025-12-03 02:24:56.850222723 +0000 UTC m=+1.524915977 container died ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:24:56 compute-0 systemd[1]: libpod-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Consumed 1.221s CPU time.
Dec  3 02:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-288e483fcdda6f6d9e5220051364972e9df2e24d32d243d758af54dc6a575382-merged.mount: Deactivated successfully.
Dec  3 02:24:56 compute-0 podman[462933]: 2025-12-03 02:24:56.949934529 +0000 UTC m=+1.624627773 container remove ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goldberg, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:24:56 compute-0 systemd[1]: libpod-conmon-ce69aa323dc7ac4f89a287695467953874dfb559a15b8c9daf5c1b1c3c8bd0fb.scope: Deactivated successfully.
Dec  3 02:24:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:24:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:24:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 062907cc-b64d-4ecf-a366-91930dcfb4d1 does not exist
Dec  3 02:24:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev df2ed028-2931-473c-9049-cdbf2e6dd2db does not exist
Dec  3 02:24:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  3 02:24:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:58 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:24:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:24:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:24:58 compute-0 nova_compute[351485]: 2025-12-03 02:24:58.937 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:24:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 255 B/s wr, 4 op/s
Dec  3 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.657 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.658 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:24:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:24:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:24:59 compute-0 podman[158098]: time="2025-12-03T02:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:24:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8668 "" "Go-http-client/1.1"
Dec  3 02:24:59 compute-0 podman[463045]: 2025-12-03 02:24:59.90620986 +0000 UTC m=+0.152033665 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:25:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:25:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:25:01 compute-0 nova_compute[351485]: 2025-12-03 02:25:01.550 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:02 compute-0 podman[463066]: 2025-12-03 02:25:02.867495491 +0000 UTC m=+0.099389028 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:25:02 compute-0 podman[463079]: 2025-12-03 02:25:02.868454028 +0000 UTC m=+0.084982651 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec  3 02:25:02 compute-0 podman[463065]: 2025-12-03 02:25:02.874211381 +0000 UTC m=+0.124372704 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal)
Dec  3 02:25:02 compute-0 podman[463067]: 2025-12-03 02:25:02.877091132 +0000 UTC m=+0.109573315 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64)
Dec  3 02:25:02 compute-0 podman[463064]: 2025-12-03 02:25:02.905085803 +0000 UTC m=+0.154569817 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:25:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:25:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:03 compute-0 nova_compute[351485]: 2025-12-03 02:25:03.941 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:25:06 compute-0 nova_compute[351485]: 2025-12-03 02:25:06.553 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 8.4 KiB/s wr, 1 op/s
Dec  3 02:25:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:08 compute-0 nova_compute[351485]: 2025-12-03 02:25:08.944 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  3 02:25:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.3 KiB/s wr, 0 op/s
Dec  3 02:25:11 compute-0 nova_compute[351485]: 2025-12-03 02:25:11.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec  3 02:25:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:13 compute-0 nova_compute[351485]: 2025-12-03 02:25:13.950 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec  3 02:25:16 compute-0 nova_compute[351485]: 2025-12-03 02:25:16.560 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:25:17 compute-0 nova_compute[351485]: 2025-12-03 02:25:17.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:18 compute-0 nova_compute[351485]: 2025-12-03 02:25:18.955 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.513 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.514 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.526 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.531 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:25:19.532699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.571 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.55859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.611 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.612 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.613 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:25:19.613157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.619 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.626 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.628 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.630 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.631 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.631 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:25:19.628167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:25:19.630865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.633 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.634 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.635 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:25:19.633676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.636 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.637 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:25:19.636675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:25:19.639320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.653 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.654 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.669 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.670 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.671 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:25:19.672231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.710 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.711 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.750 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.750 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.752 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.753 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:25:19.752607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.755 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.756 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.756 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.757 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:25:19.755466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.759 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.760 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.760 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.761 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:25:19.759092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:25:19.762928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.763 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.764 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.765 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.766 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:25:19.765324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.767 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.769 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72830976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.770 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.770 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73048064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:25:19.769600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.771 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8629084086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.773 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.774 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10027508187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.774 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:25:19.773152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.776 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.777 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.777 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.778 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.778 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:25:19.776463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.779 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:25:19.779947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.780 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.781 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 171590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:25:19.782226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.782 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 334860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:25:19.784234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:25:19.785468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.785 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.787 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:25:19.786999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.788 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.789 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.790 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.791 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:25:19.789274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:25:19.790781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:25:19.791924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:25:19.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:25:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:25:21 compute-0 nova_compute[351485]: 2025-12-03 02:25:21.563 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:22 compute-0 podman[463168]: 2025-12-03 02:25:22.906213637 +0000 UTC m=+0.130852296 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:25:22 compute-0 podman[463166]: 2025-12-03 02:25:22.910438607 +0000 UTC m=+0.151732326 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:25:22 compute-0 podman[463167]: 2025-12-03 02:25:22.920892302 +0000 UTC m=+0.154830644 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:25:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 02:25:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:23 compute-0 nova_compute[351485]: 2025-12-03 02:25:23.958 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:25:25 compute-0 nova_compute[351485]: 2025-12-03 02:25:25.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:25:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:25:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037201336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.517 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.518 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.528 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.528 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:25:26 compute-0 nova_compute[351485]: 2025-12-03 02:25:26.566 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.109 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.111 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.89719772338867GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.111 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.112 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.219 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.220 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.267 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.303 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.304 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.322 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.350 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:25:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.448 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:25:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:25:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680197914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.922 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.933 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.957 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.958 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:25:27 compute-0 nova_compute[351485]: 2025-12-03 02:25:27.959 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:25:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:25:28
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Dec  3 02:25:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.959 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.960 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:25:28 compute-0 nova_compute[351485]: 2025-12-03 02:25:28.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:25:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:25:29 compute-0 podman[158098]: time="2025-12-03T02:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:25:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.884 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.885 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:25:29 compute-0 nova_compute[351485]: 2025-12-03 02:25:29.885 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:25:30 compute-0 podman[463270]: 2025-12-03 02:25:30.878721915 +0000 UTC m=+0.132606957 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true)
Dec  3 02:25:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:25:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.436 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.457 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.457 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.458 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.570 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:31 compute-0 nova_compute[351485]: 2025-12-03 02:25:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:25:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:33 compute-0 nova_compute[351485]: 2025-12-03 02:25:33.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:25:33 compute-0 podman[463289]: 2025-12-03 02:25:33.873283806 +0000 UTC m=+0.103926756 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:25:33 compute-0 podman[463298]: 2025-12-03 02:25:33.882894517 +0000 UTC m=+0.100800338 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  3 02:25:33 compute-0 podman[463293]: 2025-12-03 02:25:33.885867091 +0000 UTC m=+0.102186057 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 02:25:33 compute-0 podman[463288]: 2025-12-03 02:25:33.90283675 +0000 UTC m=+0.145334125 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, release=1755695350)
Dec  3 02:25:33 compute-0 podman[463287]: 2025-12-03 02:25:33.908605013 +0000 UTC m=+0.158295061 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  3 02:25:33 compute-0 nova_compute[351485]: 2025-12-03 02:25:33.964 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:25:35 compute-0 nova_compute[351485]: 2025-12-03 02:25:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.742453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735742590, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 251, "total_data_size": 1439534, "memory_usage": 1461728, "flush_reason": "Manual Compaction"}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735757099, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1426161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43017, "largest_seqno": 44008, "table_properties": {"data_size": 1421230, "index_size": 2519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10518, "raw_average_key_size": 19, "raw_value_size": 1411385, "raw_average_value_size": 2638, "num_data_blocks": 113, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728638, "oldest_key_time": 1764728638, "file_creation_time": 1764728735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 14752 microseconds, and 7914 cpu microseconds.
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.757210) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1426161 bytes OK
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.757238) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760289) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760312) EVENT_LOG_v1 {"time_micros": 1764728735760305, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.760334) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1434835, prev total WAL file size 1434835, number of live WAL files 2.
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.761777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1392KB)], [101(9626KB)]
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735761861, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11283244, "oldest_snapshot_seqno": -1}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5933 keys, 9589538 bytes, temperature: kUnknown
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735846081, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9589538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9549315, "index_size": 24305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154541, "raw_average_key_size": 26, "raw_value_size": 9441454, "raw_average_value_size": 1591, "num_data_blocks": 969, "num_entries": 5933, "num_filter_entries": 5933, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.846298) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9589538 bytes
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.848812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 6447, records dropped: 514 output_compression: NoCompression
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.848831) EVENT_LOG_v1 {"time_micros": 1764728735848822, "job": 60, "event": "compaction_finished", "compaction_time_micros": 84273, "compaction_time_cpu_micros": 35775, "output_level": 6, "num_output_files": 1, "total_output_size": 9589538, "num_input_records": 6447, "num_output_records": 5933, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735849222, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728735851023, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.761496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:35 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:35.851229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:36 compute-0 nova_compute[351485]: 2025-12-03 02:25:36.574 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:25:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:25:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:25:38 compute-0 nova_compute[351485]: 2025-12-03 02:25:38.969 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:25:39 compute-0 nova_compute[351485]: 2025-12-03 02:25:39.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:25:39 compute-0 nova_compute[351485]: 2025-12-03 02:25:39.581 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 02:25:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec  3 02:25:41 compute-0 nova_compute[351485]: 2025-12-03 02:25:41.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:25:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:43 compute-0 nova_compute[351485]: 2025-12-03 02:25:43.975 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:46 compute-0 nova_compute[351485]: 2025-12-03 02:25:46.582 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:25:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:25:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:25:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/779942832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:25:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.449848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748449901, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 250, "total_data_size": 183587, "memory_usage": 189536, "flush_reason": "Manual Compaction"}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748456294, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 181513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44009, "largest_seqno": 44355, "table_properties": {"data_size": 179351, "index_size": 326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5904, "raw_average_key_size": 20, "raw_value_size": 175107, "raw_average_value_size": 599, "num_data_blocks": 15, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728736, "oldest_key_time": 1764728736, "file_creation_time": 1764728748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 6792 microseconds, and 2853 cpu microseconds.
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.456636) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 181513 bytes OK
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.456668) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461342) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461370) EVENT_LOG_v1 {"time_micros": 1764728748461362, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.461393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 181244, prev total WAL file size 181244, number of live WAL files 2.
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.462654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(177KB)], [104(9364KB)]
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748462699, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9771051, "oldest_snapshot_seqno": -1}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5718 keys, 6482067 bytes, temperature: kUnknown
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748537393, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6482067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6448055, "index_size": 18606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 150260, "raw_average_key_size": 26, "raw_value_size": 6348636, "raw_average_value_size": 1110, "num_data_blocks": 733, "num_entries": 5718, "num_filter_entries": 5718, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.537895) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6482067 bytes
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.540883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.6 rd, 86.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(89.5) write-amplify(35.7) OK, records in: 6225, records dropped: 507 output_compression: NoCompression
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.540917) EVENT_LOG_v1 {"time_micros": 1764728748540902, "job": 62, "event": "compaction_finished", "compaction_time_micros": 74797, "compaction_time_cpu_micros": 43343, "output_level": 6, "num_output_files": 1, "total_output_size": 6482067, "num_input_records": 6225, "num_output_records": 5718, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748541171, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728748545657, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.462337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:25:48.545803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:25:48 compute-0 nova_compute[351485]: 2025-12-03 02:25:48.977 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:25:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:51 compute-0 nova_compute[351485]: 2025-12-03 02:25:51.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:53 compute-0 podman[463389]: 2025-12-03 02:25:53.868722281 +0000 UTC m=+0.110924994 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:25:53 compute-0 podman[463388]: 2025-12-03 02:25:53.884475916 +0000 UTC m=+0.130200109 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 02:25:53 compute-0 podman[463387]: 2025-12-03 02:25:53.910194172 +0000 UTC m=+0.156560453 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Dec  3 02:25:53 compute-0 nova_compute[351485]: 2025-12-03 02:25:53.981 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:25:56 compute-0 nova_compute[351485]: 2025-12-03 02:25:56.587 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 821a63a6-4efd-4fd7-b072-7a905e991501 does not exist
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 491b2b96-98fb-4527-bacd-9f6256d7ff92 does not exist
Dec  3 02:25:58 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev c14a9550-023e-4129-af02-e856ce723cb7 does not exist
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:25:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:25:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:25:58 compute-0 nova_compute[351485]: 2025-12-03 02:25:58.984 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:25:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:25:59 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.659 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:25:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:25:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:25:59 compute-0 podman[158098]: time="2025-12-03T02:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.764284141 +0000 UTC m=+0.088297805 container create 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:25:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.726123763 +0000 UTC m=+0.050137497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:25:59 compute-0 systemd[1]: Started libpod-conmon-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope.
Dec  3 02:25:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.936052712 +0000 UTC m=+0.260066406 container init 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.95299545 +0000 UTC m=+0.277009124 container start 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.958778504 +0000 UTC m=+0.282792268 container attach 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:25:59 compute-0 intelligent_liskov[463733]: 167 167
Dec  3 02:25:59 compute-0 podman[463719]: 2025-12-03 02:25:59.969870767 +0000 UTC m=+0.293884461 container died 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  3 02:25:59 compute-0 systemd[1]: libpod-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope: Deactivated successfully.
Dec  3 02:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-935fe44948038371fd171d1482a698dff8d9d60f17c28b04933cb3663125405d-merged.mount: Deactivated successfully.
Dec  3 02:26:00 compute-0 podman[463719]: 2025-12-03 02:26:00.103220323 +0000 UTC m=+0.427233987 container remove 132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:26:00 compute-0 systemd[1]: libpod-conmon-132f0686ef8efba164f23dbed5afac7098fe5e5263df2d42f2fd578f48a95f20.scope: Deactivated successfully.
Dec  3 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.40374089 +0000 UTC m=+0.084438925 container create 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  3 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.35982992 +0000 UTC m=+0.040527995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:26:00 compute-0 systemd[1]: Started libpod-conmon-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope.
Dec  3 02:26:00 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.578836905 +0000 UTC m=+0.259534970 container init 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.605161389 +0000 UTC m=+0.285859444 container start 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:26:00 compute-0 podman[463759]: 2025-12-03 02:26:00.619797712 +0000 UTC m=+0.300495767 container attach 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:26:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:26:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:26:01 compute-0 nova_compute[351485]: 2025-12-03 02:26:01.591 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:01 compute-0 podman[463798]: 2025-12-03 02:26:01.861861829 +0000 UTC m=+0.109232015 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:26:01 compute-0 silly_jepsen[463777]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:26:01 compute-0 silly_jepsen[463777]: --> relative data size: 1.0
Dec  3 02:26:01 compute-0 silly_jepsen[463777]: --> All data devices are unavailable
Dec  3 02:26:01 compute-0 systemd[1]: libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Deactivated successfully.
Dec  3 02:26:01 compute-0 systemd[1]: libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Consumed 1.216s CPU time.
Dec  3 02:26:01 compute-0 conmon[463777]: conmon 5ea28a27c4e9e32f97e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope/container/memory.events
Dec  3 02:26:01 compute-0 podman[463759]: 2025-12-03 02:26:01.918772517 +0000 UTC m=+1.599470572 container died 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-68eb2cdf4a98317bc2c1dbdfe3c3c920f544eccf47c35d3c2a7a59f667ed3790-merged.mount: Deactivated successfully.
Dec  3 02:26:02 compute-0 podman[463759]: 2025-12-03 02:26:02.018324398 +0000 UTC m=+1.699022433 container remove 5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:26:02 compute-0 systemd[1]: libpod-conmon-5ea28a27c4e9e32f97e13ee90634d4000ccda72017e5ce1c6ebfc5567d0cbdd7.scope: Deactivated successfully.
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.103576938 +0000 UTC m=+0.094093678 container create 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.07108432 +0000 UTC m=+0.061601100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:26:03 compute-0 systemd[1]: Started libpod-conmon-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope.
Dec  3 02:26:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.267678503 +0000 UTC m=+0.258195253 container init 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.281970106 +0000 UTC m=+0.272486846 container start 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.287345178 +0000 UTC m=+0.277861918 container attach 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:26:03 compute-0 quizzical_khorana[463994]: 167 167
Dec  3 02:26:03 compute-0 systemd[1]: libpod-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope: Deactivated successfully.
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.296389243 +0000 UTC m=+0.286905973 container died 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f413ddd69871522dc3d0db570d876e7b81f10851ce4054ef035d6839a6fca279-merged.mount: Deactivated successfully.
Dec  3 02:26:03 compute-0 podman[463978]: 2025-12-03 02:26:03.366859894 +0000 UTC m=+0.357376664 container remove 8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:26:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:03 compute-0 systemd[1]: libpod-conmon-8cf6e3e0e6e16060a0d4d14bd7d94a3e2e7c2761db517cc45cd42c33bd1d2873.scope: Deactivated successfully.
Dec  3 02:26:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.600150282 +0000 UTC m=+0.074249408 container create 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.571641087 +0000 UTC m=+0.045740203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:26:03 compute-0 systemd[1]: Started libpod-conmon-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope.
Dec  3 02:26:03 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.792061832 +0000 UTC m=+0.266160948 container init 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.805054459 +0000 UTC m=+0.279153545 container start 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:26:03 compute-0 podman[464017]: 2025-12-03 02:26:03.809170755 +0000 UTC m=+0.283269871 container attach 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  3 02:26:03 compute-0 nova_compute[351485]: 2025-12-03 02:26:03.987 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:04 compute-0 thirsty_noether[464033]: {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    "0": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "devices": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "/dev/loop3"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            ],
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_name": "ceph_lv0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_size": "21470642176",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "name": "ceph_lv0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "tags": {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_name": "ceph",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.crush_device_class": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.encrypted": "0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_id": "0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.vdo": "0"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            },
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "vg_name": "ceph_vg0"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        }
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    ],
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    "1": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "devices": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "/dev/loop4"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            ],
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_name": "ceph_lv1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_size": "21470642176",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "name": "ceph_lv1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "tags": {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_name": "ceph",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.crush_device_class": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.encrypted": "0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_id": "1",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.vdo": "0"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            },
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "vg_name": "ceph_vg1"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        }
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    ],
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    "2": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "devices": [
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "/dev/loop5"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            ],
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_name": "ceph_lv2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_size": "21470642176",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "name": "ceph_lv2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "tags": {
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.cluster_name": "ceph",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.crush_device_class": "",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.encrypted": "0",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osd_id": "2",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:                "ceph.vdo": "0"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            },
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "type": "block",
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:            "vg_name": "ceph_vg2"
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:        }
Dec  3 02:26:04 compute-0 thirsty_noether[464033]:    ]
Dec  3 02:26:04 compute-0 thirsty_noether[464033]: }
Dec  3 02:26:04 compute-0 systemd[1]: libpod-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope: Deactivated successfully.
Dec  3 02:26:04 compute-0 podman[464017]: 2025-12-03 02:26:04.632883729 +0000 UTC m=+1.106982835 container died 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  3 02:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-89fe6fb322c73f5c71e2da91bcabcd68183b071daba927157406efbe360cec95-merged.mount: Deactivated successfully.
Dec  3 02:26:04 compute-0 podman[464017]: 2025-12-03 02:26:04.721676676 +0000 UTC m=+1.195775762 container remove 4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:26:04 compute-0 systemd[1]: libpod-conmon-4c18f7bf248bf0bbbe7d547a79aa2438e32aae336247b0925b5dbb928c8831ee.scope: Deactivated successfully.
Dec  3 02:26:04 compute-0 podman[464053]: 2025-12-03 02:26:04.803332411 +0000 UTC m=+0.115399589 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  3 02:26:04 compute-0 podman[464050]: 2025-12-03 02:26:04.812368717 +0000 UTC m=+0.119749752 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 02:26:04 compute-0 podman[464045]: 2025-12-03 02:26:04.837153507 +0000 UTC m=+0.143735900 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 02:26:04 compute-0 podman[464046]: 2025-12-03 02:26:04.84791339 +0000 UTC m=+0.139219521 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:26:04 compute-0 podman[464043]: 2025-12-03 02:26:04.871465306 +0000 UTC m=+0.183344718 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:26:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.842591212 +0000 UTC m=+0.120613387 container create e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.772635116 +0000 UTC m=+0.050657331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:26:05 compute-0 systemd[1]: Started libpod-conmon-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope.
Dec  3 02:26:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.98592276 +0000 UTC m=+0.263944945 container init e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.995498801 +0000 UTC m=+0.273520936 container start e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:26:05 compute-0 podman[464289]: 2025-12-03 02:26:05.999086422 +0000 UTC m=+0.277108557 container attach e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:26:06 compute-0 sweet_beaver[464304]: 167 167
Dec  3 02:26:06 compute-0 systemd[1]: libpod-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope: Deactivated successfully.
Dec  3 02:26:06 compute-0 podman[464289]: 2025-12-03 02:26:06.005780841 +0000 UTC m=+0.283802976 container died e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b560a0cc2e69da758a20454ffc9e76fc0488000393b32db40884ff4676a606-merged.mount: Deactivated successfully.
Dec  3 02:26:06 compute-0 podman[464289]: 2025-12-03 02:26:06.051844512 +0000 UTC m=+0.329866647 container remove e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:26:06 compute-0 systemd[1]: libpod-conmon-e532c4dcf0f1e3cb5f26f999868e7f8d1374dba2143461c6591b959f0cfc54b1.scope: Deactivated successfully.
Dec  3 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.336775019 +0000 UTC m=+0.088153221 container create b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.312287707 +0000 UTC m=+0.063665919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:26:06 compute-0 systemd[1]: Started libpod-conmon-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope.
Dec  3 02:26:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.516264348 +0000 UTC m=+0.267642580 container init b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.542827558 +0000 UTC m=+0.294205800 container start b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  3 02:26:06 compute-0 podman[464327]: 2025-12-03 02:26:06.550085103 +0000 UTC m=+0.301463335 container attach b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:26:06 compute-0 nova_compute[351485]: 2025-12-03 02:26:06.595 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]: {
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_id": 2,
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "type": "bluestore"
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    },
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_id": 1,
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "type": "bluestore"
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    },
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_id": 0,
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:        "type": "bluestore"
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]:    }
Dec  3 02:26:07 compute-0 distracted_khayyam[464344]: }
Dec  3 02:26:07 compute-0 systemd[1]: libpod-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Deactivated successfully.
Dec  3 02:26:07 compute-0 podman[464327]: 2025-12-03 02:26:07.719281504 +0000 UTC m=+1.470659726 container died b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:26:07 compute-0 systemd[1]: libpod-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Consumed 1.184s CPU time.
Dec  3 02:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e8e54b38003ca44e5a3e971b52d02c5bb077c1e005deba7e67b3a82a0041bc-merged.mount: Deactivated successfully.
Dec  3 02:26:07 compute-0 podman[464327]: 2025-12-03 02:26:07.791453472 +0000 UTC m=+1.542831664 container remove b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:26:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:26:07 compute-0 systemd[1]: libpod-conmon-b1699bd0e29b7633f541d80a947fd74514efb4f93c746c8bdbc5eed3afe1626d.scope: Deactivated successfully.
Dec  3 02:26:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:26:07 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:26:07 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:26:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 95ee77f6-f4bb-47c2-9973-3651272121a0 does not exist
Dec  3 02:26:07 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8db5fe37-d429-4410-bcd9-8c03f2ba9e87 does not exist
Dec  3 02:26:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:26:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:26:08 compute-0 nova_compute[351485]: 2025-12-03 02:26:08.989 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:11 compute-0 nova_compute[351485]: 2025-12-03 02:26:11.599 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:13 compute-0 nova_compute[351485]: 2025-12-03 02:26:13.993 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:16 compute-0 nova_compute[351485]: 2025-12-03 02:26:16.603 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:17 compute-0 nova_compute[351485]: 2025-12-03 02:26:17.583 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:18 compute-0 nova_compute[351485]: 2025-12-03 02:26:18.998 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:21 compute-0 nova_compute[351485]: 2025-12-03 02:26:21.607 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:24 compute-0 nova_compute[351485]: 2025-12-03 02:26:24.001 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:24 compute-0 podman[464440]: 2025-12-03 02:26:24.898130253 +0000 UTC m=+0.133999835 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:26:24 compute-0 podman[464442]: 2025-12-03 02:26:24.898826403 +0000 UTC m=+0.130344592 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:26:24 compute-0 podman[464441]: 2025-12-03 02:26:24.903365991 +0000 UTC m=+0.134030426 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  3 02:26:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.620 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.622 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:26:25 compute-0 nova_compute[351485]: 2025-12-03 02:26:25.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:26:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:26:26 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295606808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.244 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.357 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.358 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.365 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.365 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.610 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.908 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.909 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3513MB free_disk=59.897193908691406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:26:26 compute-0 nova_compute[351485]: 2025-12-03 02:26:26.910 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.011 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.012 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.012 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.013 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.081 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:26:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:27 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:26:27 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175832973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.625 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.636 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.655 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.658 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:26:27 compute-0 nova_compute[351485]: 2025-12-03 02:26:27.660 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:26:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:26:28
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'volumes', 'images', 'default.rgw.meta', 'backups', '.mgr']
Dec  3 02:26:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:26:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.661 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.662 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:26:29 compute-0 nova_compute[351485]: 2025-12-03 02:26:29.662 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:26:29 compute-0 podman[158098]: time="2025-12-03T02:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:26:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
Dec  3 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.148 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.150 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.151 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:26:30 compute-0 nova_compute[351485]: 2025-12-03 02:26:30.152 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:26:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:26:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:26:31 compute-0 nova_compute[351485]: 2025-12-03 02:26:31.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.599 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.620 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.621 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:32 compute-0 nova_compute[351485]: 2025-12-03 02:26:32.622 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:32 compute-0 podman[464541]: 2025-12-03 02:26:32.891384566 +0000 UTC m=+0.137708940 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3)
Dec  3 02:26:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:33 compute-0 nova_compute[351485]: 2025-12-03 02:26:33.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:34 compute-0 nova_compute[351485]: 2025-12-03 02:26:34.010 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:35 compute-0 nova_compute[351485]: 2025-12-03 02:26:35.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:35 compute-0 podman[464564]: 2025-12-03 02:26:35.895087076 +0000 UTC m=+0.105162251 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:26:35 compute-0 podman[464569]: 2025-12-03 02:26:35.895326752 +0000 UTC m=+0.110959534 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 02:26:35 compute-0 podman[464562]: 2025-12-03 02:26:35.911971172 +0000 UTC m=+0.142120064 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Dec  3 02:26:35 compute-0 podman[464563]: 2025-12-03 02:26:35.916399547 +0000 UTC m=+0.134896230 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:26:35 compute-0 podman[464561]: 2025-12-03 02:26:35.958466216 +0000 UTC m=+0.193716302 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:26:36 compute-0 nova_compute[351485]: 2025-12-03 02:26:36.618 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:26:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:26:39 compute-0 nova_compute[351485]: 2025-12-03 02:26:39.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:26:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:41 compute-0 nova_compute[351485]: 2025-12-03 02:26:41.622 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:44 compute-0 nova_compute[351485]: 2025-12-03 02:26:44.019 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:46 compute-0 nova_compute[351485]: 2025-12-03 02:26:46.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:26:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:26:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:26:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3404991307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:26:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:49 compute-0 nova_compute[351485]: 2025-12-03 02:26:49.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:51 compute-0 nova_compute[351485]: 2025-12-03 02:26:51.631 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:54 compute-0 nova_compute[351485]: 2025-12-03 02:26:54.032 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:55 compute-0 podman[464661]: 2025-12-03 02:26:55.876601006 +0000 UTC m=+0.123926941 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:26:55 compute-0 podman[464663]: 2025-12-03 02:26:55.903035593 +0000 UTC m=+0.135779246 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:26:55 compute-0 podman[464662]: 2025-12-03 02:26:55.93019747 +0000 UTC m=+0.170128736 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  3 02:26:56 compute-0 nova_compute[351485]: 2025-12-03 02:26:56.634 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:26:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:26:59 compute-0 nova_compute[351485]: 2025-12-03 02:26:59.037 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:26:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.660 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:26:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:26:59.661 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:26:59 compute-0 podman[158098]: time="2025-12-03T02:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:26:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:27:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:27:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:27:01 compute-0 nova_compute[351485]: 2025-12-03 02:27:01.637 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:03 compute-0 podman[464720]: 2025-12-03 02:27:03.8850943 +0000 UTC m=+0.136739853 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  3 02:27:04 compute-0 nova_compute[351485]: 2025-12-03 02:27:04.042 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:06 compute-0 nova_compute[351485]: 2025-12-03 02:27:06.640 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:06 compute-0 podman[464742]: 2025-12-03 02:27:06.882291345 +0000 UTC m=+0.104125721 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:27:06 compute-0 podman[464748]: 2025-12-03 02:27:06.890295201 +0000 UTC m=+0.106115287 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec  3 02:27:06 compute-0 podman[464741]: 2025-12-03 02:27:06.895358685 +0000 UTC m=+0.132912725 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec  3 02:27:06 compute-0 podman[464743]: 2025-12-03 02:27:06.905057768 +0000 UTC m=+0.125899036 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, config_id=edpm, vcs-type=git, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec  3 02:27:06 compute-0 podman[464740]: 2025-12-03 02:27:06.922937093 +0000 UTC m=+0.170161416 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:27:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:09 compute-0 nova_compute[351485]: 2025-12-03 02:27:09.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:09 compute-0 podman[465011]: 2025-12-03 02:27:09.46834236 +0000 UTC m=+0.131322140 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:27:09 compute-0 podman[465011]: 2025-12-03 02:27:09.577197334 +0000 UTC m=+0.240177104 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:27:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:27:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:27:10 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:11 compute-0 nova_compute[351485]: 2025-12-03 02:27:11.644 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:11 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31e132f0-f288-46b4-bb24-1b0bc05342e5 does not exist
Dec  3 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 2df69b2f-1092-4b89-83a2-5b14f643ab73 does not exist
Dec  3 02:27:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 76c15bc3-1c08-4231-9186-4cb2f8fdba75 does not exist
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:27:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:27:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.105577011 +0000 UTC m=+0.087590814 container create f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.078299731 +0000 UTC m=+0.060313544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:13 compute-0 systemd[1]: Started libpod-conmon-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope.
Dec  3 02:27:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.270817068 +0000 UTC m=+0.252830931 container init f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.282054865 +0000 UTC m=+0.264068678 container start f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.289065653 +0000 UTC m=+0.271079536 container attach f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:27:13 compute-0 suspicious_jemison[465446]: 167 167
Dec  3 02:27:13 compute-0 systemd[1]: libpod-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope: Deactivated successfully.
Dec  3 02:27:13 compute-0 conmon[465446]: conmon f9b4bb93101542a7b1a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope/container/memory.events
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.296672208 +0000 UTC m=+0.278686031 container died f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-380a6acbd13631bc8d7c17e7389bb52efa36addc3ff5c355fffa5abcebc4cd51-merged.mount: Deactivated successfully.
Dec  3 02:27:13 compute-0 podman[465431]: 2025-12-03 02:27:13.356977611 +0000 UTC m=+0.338991384 container remove f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:27:13 compute-0 systemd[1]: libpod-conmon-f9b4bb93101542a7b1a62d51476eb912f0d4c96d01a44d84c81a8744e41a362a.scope: Deactivated successfully.
Dec  3 02:27:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.681049154 +0000 UTC m=+0.110889213 container create 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.60939329 +0000 UTC m=+0.039233409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:13 compute-0 systemd[1]: Started libpod-conmon-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope.
Dec  3 02:27:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.876848829 +0000 UTC m=+0.306688888 container init 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.899369236 +0000 UTC m=+0.329209285 container start 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:27:13 compute-0 podman[465471]: 2025-12-03 02:27:13.905308043 +0000 UTC m=+0.335148112 container attach 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:27:14 compute-0 nova_compute[351485]: 2025-12-03 02:27:14.051 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:15 compute-0 cool_hawking[465487]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:27:15 compute-0 cool_hawking[465487]: --> relative data size: 1.0
Dec  3 02:27:15 compute-0 cool_hawking[465487]: --> All data devices are unavailable
Dec  3 02:27:15 compute-0 podman[465471]: 2025-12-03 02:27:15.139690239 +0000 UTC m=+1.569530328 container died 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:27:15 compute-0 systemd[1]: libpod-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Deactivated successfully.
Dec  3 02:27:15 compute-0 systemd[1]: libpod-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Consumed 1.189s CPU time.
Dec  3 02:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed11ca9b9e7fc749c802fdec75c1c3e755eeaec5dd2c0383fc7dc0eac2f528b0-merged.mount: Deactivated successfully.
Dec  3 02:27:15 compute-0 podman[465471]: 2025-12-03 02:27:15.253678978 +0000 UTC m=+1.683518997 container remove 662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec  3 02:27:15 compute-0 systemd[1]: libpod-conmon-662089ba4fdaf6ac350a23ac3dbcab26c34221766b36d89fd0c935e9a5f728eb.scope: Deactivated successfully.
Dec  3 02:27:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.286305771 +0000 UTC m=+0.080603247 container create 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.258729472 +0000 UTC m=+0.053026978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:16 compute-0 systemd[1]: Started libpod-conmon-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope.
Dec  3 02:27:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.43147856 +0000 UTC m=+0.225776056 container init 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.444083136 +0000 UTC m=+0.238380632 container start 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.450499127 +0000 UTC m=+0.244796623 container attach 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:27:16 compute-0 systemd[1]: libpod-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope: Deactivated successfully.
Dec  3 02:27:16 compute-0 conmon[465681]: conmon 9210e7778fd89491a0c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope/container/memory.events
Dec  3 02:27:16 compute-0 clever_villani[465681]: 167 167
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.456351793 +0000 UTC m=+0.250649279 container died 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 02:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a8c53211283b7908a8e5868ead20a9bb606ce7e7a9ce4ab809aa8ed859c566-merged.mount: Deactivated successfully.
Dec  3 02:27:16 compute-0 podman[465665]: 2025-12-03 02:27:16.513662221 +0000 UTC m=+0.307959687 container remove 9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:27:16 compute-0 systemd[1]: libpod-conmon-9210e7778fd89491a0c31a863af9898590169a050b61ea665a09008eb2147e6c.scope: Deactivated successfully.
Dec  3 02:27:16 compute-0 nova_compute[351485]: 2025-12-03 02:27:16.648 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.797350163 +0000 UTC m=+0.075676998 container create 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.764275389 +0000 UTC m=+0.042602224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:16 compute-0 systemd[1]: Started libpod-conmon-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope.
Dec  3 02:27:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.960693296 +0000 UTC m=+0.239020161 container init 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.984052966 +0000 UTC m=+0.262379801 container start 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:27:16 compute-0 podman[465703]: 2025-12-03 02:27:16.990456077 +0000 UTC m=+0.268782942 container attach 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:27:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]: {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    "0": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "devices": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "/dev/loop3"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            ],
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_name": "ceph_lv0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_size": "21470642176",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "name": "ceph_lv0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "tags": {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_name": "ceph",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.crush_device_class": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.encrypted": "0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_id": "0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.vdo": "0"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            },
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "vg_name": "ceph_vg0"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        }
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    ],
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    "1": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "devices": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "/dev/loop4"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            ],
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_name": "ceph_lv1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_size": "21470642176",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "name": "ceph_lv1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "tags": {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_name": "ceph",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.crush_device_class": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.encrypted": "0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_id": "1",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.vdo": "0"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            },
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "vg_name": "ceph_vg1"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        }
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    ],
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    "2": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "devices": [
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "/dev/loop5"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            ],
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_name": "ceph_lv2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_size": "21470642176",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "name": "ceph_lv2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "tags": {
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.cluster_name": "ceph",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.crush_device_class": "",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.encrypted": "0",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osd_id": "2",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:                "ceph.vdo": "0"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            },
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "type": "block",
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:            "vg_name": "ceph_vg2"
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:        }
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]:    ]
Dec  3 02:27:17 compute-0 suspicious_dubinsky[465719]: }
Dec  3 02:27:17 compute-0 systemd[1]: libpod-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope: Deactivated successfully.
Dec  3 02:27:17 compute-0 podman[465703]: 2025-12-03 02:27:17.898123021 +0000 UTC m=+1.176449846 container died 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-656a16d96209ec6ebde10805b52360107cb5701298778643649033fadb9a1916-merged.mount: Deactivated successfully.
Dec  3 02:27:18 compute-0 podman[465703]: 2025-12-03 02:27:18.072397283 +0000 UTC m=+1.350724118 container remove 8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:27:18 compute-0 systemd[1]: libpod-conmon-8ec33575ae577ae628adccad1333a4841a9eca9fc5f19ae548649d11a8a6df41.scope: Deactivated successfully.
Dec  3 02:27:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:19 compute-0 nova_compute[351485]: 2025-12-03 02:27:19.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.144002987 +0000 UTC m=+0.090097605 container create 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.110139311 +0000 UTC m=+0.056233979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:19 compute-0 systemd[1]: Started libpod-conmon-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope.
Dec  3 02:27:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.274112672 +0000 UTC m=+0.220207310 container init 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.289885047 +0000 UTC m=+0.235979625 container start 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.294894279 +0000 UTC m=+0.240988897 container attach 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 02:27:19 compute-0 thirsty_agnesi[465894]: 167 167
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.29919378 +0000 UTC m=+0.245288368 container died 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 02:27:19 compute-0 systemd[1]: libpod-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope: Deactivated successfully.
Dec  3 02:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-56fb6952e35943e91d7213d6395cfc5cb568953b8ac08e3722f314b005f64e35-merged.mount: Deactivated successfully.
Dec  3 02:27:19 compute-0 podman[465879]: 2025-12-03 02:27:19.368982491 +0000 UTC m=+0.315077059 container remove 5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:27:19 compute-0 systemd[1]: libpod-conmon-5b04e0c6fe63dd836a560a9318bd818ab3bf6a3ee5f11913af95003047e3936a.scope: Deactivated successfully.
Dec  3 02:27:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.514 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.515 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.528 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.534 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.538 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56146b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.554 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.558 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:27:19.555918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 nova_compute[351485]: 2025-12-03 02:27:19.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.599 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 43.55859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.636 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.43359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.637 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:27:19.638614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.642905477 +0000 UTC m=+0.078298292 container create 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.649 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.657 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:27:19.657315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:27:19.658901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.660 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.661 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.663 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:27:19.660420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:27:19.661813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:27:19.663059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 systemd[1]: Started libpod-conmon-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope.
Dec  3 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.608212357 +0000 UTC m=+0.043605222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.706 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.709 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:27:19.708404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.775634826 +0000 UTC m=+0.211027671 container init 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.788 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 30149632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.789 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.793393017 +0000 UTC m=+0.228785842 container start 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:27:19 compute-0 podman[465919]: 2025-12-03 02:27:19.799358216 +0000 UTC m=+0.234751081 container attach 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.832 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:27:19.834875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.836 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3251057957 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 228292831 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.837 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:27:19.836597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.839 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:27:19.839060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.844 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:27:19.841722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:27:19.844436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.845 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.847 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 72830976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:27:19.847279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.848 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 8629084086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:27:19.850812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.851 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.852 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.853 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:27:19.853404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.854 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:27:19.856051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.856 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 290740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 336690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:27:19.857829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.859 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:27:19.859678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:27:19.861442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.863 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:27:19.863072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.864 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.865 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:27:19.865167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:27:19.866952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:27:19.868177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:27:19.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]: {
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_id": 2,
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "type": "bluestore"
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    },
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_id": 1,
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "type": "bluestore"
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    },
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_id": 0,
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:        "type": "bluestore"
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]:    }
Dec  3 02:27:20 compute-0 thirsty_shannon[465934]: }
Dec  3 02:27:21 compute-0 systemd[1]: libpod-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Deactivated successfully.
Dec  3 02:27:21 compute-0 systemd[1]: libpod-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Consumed 1.213s CPU time.
Dec  3 02:27:21 compute-0 podman[465967]: 2025-12-03 02:27:21.120208348 +0000 UTC m=+0.069761261 container died 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91bf95015077c22adaeb7980209a4357d7002a4677be7361e3b7ea2842a32168-merged.mount: Deactivated successfully.
Dec  3 02:27:21 compute-0 podman[465967]: 2025-12-03 02:27:21.225498462 +0000 UTC m=+0.175051315 container remove 24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:27:21 compute-0 systemd[1]: libpod-conmon-24caf5e4f7bc8fd28f85dfcdb223a2176ea360e6c2d6b8861fd702cedf8f5a40.scope: Deactivated successfully.
Dec  3 02:27:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:27:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:21 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:27:21 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9e8ad0c4-9456-458e-84a5-0a45f790ddea does not exist
Dec  3 02:27:21 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0e404682-11f3-4b41-a7cd-a78409d4a876 does not exist
Dec  3 02:27:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:21 compute-0 nova_compute[351485]: 2025-12-03 02:27:21.652 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:22 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:27:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:24 compute-0 nova_compute[351485]: 2025-12-03 02:27:24.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:26 compute-0 nova_compute[351485]: 2025-12-03 02:27:26.657 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:26 compute-0 podman[466037]: 2025-12-03 02:27:26.879958134 +0000 UTC m=+0.119010582 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:27:26 compute-0 podman[466035]: 2025-12-03 02:27:26.904429845 +0000 UTC m=+0.143191255 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:27:26 compute-0 podman[466036]: 2025-12-03 02:27:26.917426952 +0000 UTC m=+0.155690578 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec  3 02:27:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.624 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.625 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.626 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:27:27 compute-0 nova_compute[351485]: 2025-12-03 02:27:27.626 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:27:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:27:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290083264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.167 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.294 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.302 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.303 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:27:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:27:28
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'volumes', 'default.rgw.control']
Dec  3 02:27:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.716 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.718 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.897193908691406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.719 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.913 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.914 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:27:28 compute-0 nova_compute[351485]: 2025-12-03 02:27:28.915 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.110 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:27:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:27:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280635965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.637 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:27:29 compute-0 podman[158098]: time="2025-12-03T02:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:27:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8670 "" "Go-http-client/1.1"
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.864 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.869 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.870 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.872 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.873 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:27:29 compute-0 nova_compute[351485]: 2025-12-03 02:27:29.906 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:27:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:27:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.907 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:31 compute-0 nova_compute[351485]: 2025-12-03 02:27:31.908 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.360 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.363 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:27:32 compute-0 nova_compute[351485]: 2025-12-03 02:27:32.364 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:27:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.280 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.307 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.308 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.309 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.310 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:34 compute-0 nova_compute[351485]: 2025-12-03 02:27:34.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:34 compute-0 podman[466136]: 2025-12-03 02:27:34.911359563 +0000 UTC m=+0.165244108 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:27:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:35 compute-0 nova_compute[351485]: 2025-12-03 02:27:35.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:36 compute-0 nova_compute[351485]: 2025-12-03 02:27:36.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:36 compute-0 nova_compute[351485]: 2025-12-03 02:27:36.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:37 compute-0 podman[466170]: 2025-12-03 02:27:37.892713611 +0000 UTC m=+0.107971009 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:27:37 compute-0 podman[466158]: 2025-12-03 02:27:37.899108191 +0000 UTC m=+0.137753090 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 02:27:37 compute-0 podman[466157]: 2025-12-03 02:27:37.913436076 +0000 UTC m=+0.157313643 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  3 02:27:37 compute-0 podman[466159]: 2025-12-03 02:27:37.918041376 +0000 UTC m=+0.155764159 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:27:37 compute-0 podman[466160]: 2025-12-03 02:27:37.918079187 +0000 UTC m=+0.130646399 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, architecture=x86_64, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  3 02:27:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518418921338803 of space, bias 1.0, pg target 0.45552567640164093 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:27:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.069 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:39 compute-0 nova_compute[351485]: 2025-12-03 02:27:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:27:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 9935 writes, 45K keys, 9935 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 9935 writes, 9935 syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1341 writes, 6318 keys, 1341 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s#012Interval WAL: 1341 writes, 1341 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     97.4      0.57              0.26        31    0.018       0      0       0.0       0.0#012  L6      1/0    6.18 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    132.5    108.8      2.09              1.03        30    0.070    160K    16K       0.0       0.0#012 Sum      1/0    6.18 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    104.1    106.4      2.66              1.29        61    0.044    160K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4    101.1     98.0      0.54              0.28        12    0.045     37K   3096       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    132.5    108.8      2.09              1.03        30    0.070    160K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     97.8      0.57              0.26        30    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.07 MB/s write, 0.27 GB read, 0.07 MB/s read, 2.7 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 308.00 MB usage: 32.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000326 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2091,31.20 MB,10.1287%) FilterBlock(62,452.30 KB,0.143408%) IndexBlock(62,749.83 KB,0.237745%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 02:27:40 compute-0 nova_compute[351485]: 2025-12-03 02:27:40.599 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:27:40 compute-0 nova_compute[351485]: 2025-12-03 02:27:40.600 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:27:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:41 compute-0 nova_compute[351485]: 2025-12-03 02:27:41.665 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:44 compute-0 nova_compute[351485]: 2025-12-03 02:27:44.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:46 compute-0 nova_compute[351485]: 2025-12-03 02:27:46.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:27:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:27:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:27:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924182898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:27:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:49 compute-0 nova_compute[351485]: 2025-12-03 02:27:49.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:51 compute-0 nova_compute[351485]: 2025-12-03 02:27:51.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:54 compute-0 nova_compute[351485]: 2025-12-03 02:27:54.078 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:56 compute-0 nova_compute[351485]: 2025-12-03 02:27:56.676 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:57 compute-0 podman[466260]: 2025-12-03 02:27:57.885053479 +0000 UTC m=+0.110315187 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:27:57 compute-0 podman[466258]: 2025-12-03 02:27:57.909988173 +0000 UTC m=+0.150198633 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:27:57 compute-0 podman[466259]: 2025-12-03 02:27:57.942851331 +0000 UTC m=+0.180486118 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec  3 02:27:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:27:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:27:59 compute-0 nova_compute[351485]: 2025-12-03 02:27:59.081 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:27:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.662 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.662 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:27:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:27:59.663 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:27:59 compute-0 podman[158098]: time="2025-12-03T02:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:27:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:28:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:28:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:28:01 compute-0 nova_compute[351485]: 2025-12-03 02:28:01.679 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:28:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:04 compute-0 nova_compute[351485]: 2025-12-03 02:28:04.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 170 B/s wr, 1 op/s
Dec  3 02:28:05 compute-0 podman[466315]: 2025-12-03 02:28:05.884051774 +0000 UTC m=+0.129574940 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:28:06 compute-0 nova_compute[351485]: 2025-12-03 02:28:06.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:06 compute-0 nova_compute[351485]: 2025-12-03 02:28:06.682 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  3 02:28:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:08 compute-0 podman[466335]: 2025-12-03 02:28:08.8918906 +0000 UTC m=+0.115398641 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:28:08 compute-0 podman[466343]: 2025-12-03 02:28:08.902659024 +0000 UTC m=+0.104627026 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:28:08 compute-0 podman[466333]: 2025-12-03 02:28:08.90818831 +0000 UTC m=+0.147749174 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:28:08 compute-0 podman[466334]: 2025-12-03 02:28:08.9124483 +0000 UTC m=+0.141164207 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 02:28:08 compute-0 podman[466336]: 2025-12-03 02:28:08.917928465 +0000 UTC m=+0.130063804 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, name=ubi9, config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:28:09 compute-0 nova_compute[351485]: 2025-12-03 02:28:09.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  3 02:28:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:28:11 compute-0 nova_compute[351485]: 2025-12-03 02:28:11.685 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:28:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:14 compute-0 nova_compute[351485]: 2025-12-03 02:28:14.092 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  3 02:28:16 compute-0 nova_compute[351485]: 2025-12-03 02:28:16.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 8.4 KiB/s wr, 3 op/s
Dec  3 02:28:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:19 compute-0 nova_compute[351485]: 2025-12-03 02:28:19.095 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  3 02:28:19 compute-0 nova_compute[351485]: 2025-12-03 02:28:19.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Dec  3 02:28:21 compute-0 nova_compute[351485]: 2025-12-03 02:28:21.691 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4ecd5193-066b-4f4c-b7ce-42f0dd320441 does not exist
Dec  3 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f39ccec4-69a2-4acc-984a-dbad9e4fcbcb does not exist
Dec  3 02:28:22 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e267c131-aa21-4f6d-b912-7606931d33c2 does not exist
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:28:22 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:28:22 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:28:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Dec  3 02:28:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:23 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.043820142 +0000 UTC m=+0.070911554 container create 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 02:28:24 compute-0 nova_compute[351485]: 2025-12-03 02:28:24.097 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.007419264 +0000 UTC m=+0.034510766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:24 compute-0 systemd[1]: Started libpod-conmon-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope.
Dec  3 02:28:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.212023582 +0000 UTC m=+0.239115084 container init 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.230142734 +0000 UTC m=+0.257234186 container start 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.238027576 +0000 UTC m=+0.265119028 container attach 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:28:24 compute-0 inspiring_gates[466717]: 167 167
Dec  3 02:28:24 compute-0 systemd[1]: libpod-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope: Deactivated successfully.
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.246161936 +0000 UTC m=+0.273253388 container died 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d4afeb7540fed811c0b8a9a8cbc889afe23875518104e3421336c2132cffd4e-merged.mount: Deactivated successfully.
Dec  3 02:28:24 compute-0 podman[466701]: 2025-12-03 02:28:24.319696913 +0000 UTC m=+0.346788355 container remove 22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gates, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:28:24 compute-0 systemd[1]: libpod-conmon-22efcef3faa5439f2908f52abe9fa5e04b447673b283ff2401e8c6277d1d9309.scope: Deactivated successfully.
Dec  3 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.622090272 +0000 UTC m=+0.083836129 container create 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.586099486 +0000 UTC m=+0.047845393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:24 compute-0 systemd[1]: Started libpod-conmon-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope.
Dec  3 02:28:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.835863669 +0000 UTC m=+0.297609576 container init 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.845614615 +0000 UTC m=+0.307360452 container start 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:28:24 compute-0 podman[466739]: 2025-12-03 02:28:24.85145377 +0000 UTC m=+0.313199607 container attach 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:28:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 341 B/s wr, 0 op/s
Dec  3 02:28:26 compute-0 fervent_leavitt[466756]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:28:26 compute-0 fervent_leavitt[466756]: --> relative data size: 1.0
Dec  3 02:28:26 compute-0 fervent_leavitt[466756]: --> All data devices are unavailable
Dec  3 02:28:26 compute-0 systemd[1]: libpod-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Deactivated successfully.
Dec  3 02:28:26 compute-0 podman[466739]: 2025-12-03 02:28:26.24619212 +0000 UTC m=+1.707937977 container died 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:28:26 compute-0 systemd[1]: libpod-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Consumed 1.325s CPU time.
Dec  3 02:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-55deec32efe2af1db4dba36866cedcd1273606f8ef6a07e2ef5ae9c0775afe90-merged.mount: Deactivated successfully.
Dec  3 02:28:26 compute-0 podman[466739]: 2025-12-03 02:28:26.348007365 +0000 UTC m=+1.809753192 container remove 031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:28:26 compute-0 systemd[1]: libpod-conmon-031c160e40bd4ef287ef1ca836a20d5d0e2c96b89f4462373a63bd02dfd19cb9.scope: Deactivated successfully.
Dec  3 02:28:26 compute-0 nova_compute[351485]: 2025-12-03 02:28:26.694 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.746384368 +0000 UTC m=+0.070199343 container create d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.720446966 +0000 UTC m=+0.044261951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:27 compute-0 systemd[1]: Started libpod-conmon-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope.
Dec  3 02:28:27 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.899211694 +0000 UTC m=+0.223026679 container init d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.918135818 +0000 UTC m=+0.241950793 container start d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.923661174 +0000 UTC m=+0.247476149 container attach d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:28:27 compute-0 sharp_wright[466947]: 167 167
Dec  3 02:28:27 compute-0 systemd[1]: libpod-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope: Deactivated successfully.
Dec  3 02:28:27 compute-0 podman[466931]: 2025-12-03 02:28:27.931703061 +0000 UTC m=+0.255518076 container died d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b54c79e48804c76f3bdd39b9a5db52a56f9ac37b16b8abfa127489983d0360-merged.mount: Deactivated successfully.
Dec  3 02:28:28 compute-0 podman[466931]: 2025-12-03 02:28:28.014849569 +0000 UTC m=+0.338664514 container remove d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:28:28 compute-0 systemd[1]: libpod-conmon-d68bc6e759493ff28bd6e89682818ff1cd8631ee73efb093c4c9f9e309434740.scope: Deactivated successfully.
Dec  3 02:28:28 compute-0 podman[466954]: 2025-12-03 02:28:28.067925978 +0000 UTC m=+0.095872228 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:28:28 compute-0 podman[466964]: 2025-12-03 02:28:28.093750077 +0000 UTC m=+0.098061100 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  3 02:28:28 compute-0 podman[466957]: 2025-12-03 02:28:28.101075994 +0000 UTC m=+0.117327444 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.241713856 +0000 UTC m=+0.073787815 container create a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  3 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.210174746 +0000 UTC m=+0.042248755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:28 compute-0 systemd[1]: Started libpod-conmon-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope.
Dec  3 02:28:28 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.427076421 +0000 UTC m=+0.259150430 container init a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.447698574 +0000 UTC m=+0.279772533 container start a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:28:28 compute-0 podman[467029]: 2025-12-03 02:28:28.455250197 +0000 UTC m=+0.287324226 container attach a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:28:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:28:28
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups']
Dec  3 02:28:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:28:28 compute-0 nova_compute[351485]: 2025-12-03 02:28:28.613 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.100 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:29 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:28:29 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376880488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.196 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]: {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    "0": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "devices": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "/dev/loop3"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            ],
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_name": "ceph_lv0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_size": "21470642176",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "name": "ceph_lv0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "tags": {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_name": "ceph",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.crush_device_class": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.encrypted": "0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_id": "0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.vdo": "0"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            },
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "vg_name": "ceph_vg0"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        }
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    ],
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    "1": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "devices": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "/dev/loop4"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            ],
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_name": "ceph_lv1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_size": "21470642176",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "name": "ceph_lv1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "tags": {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_name": "ceph",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.crush_device_class": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.encrypted": "0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_id": "1",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.vdo": "0"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            },
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "vg_name": "ceph_vg1"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        }
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    ],
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    "2": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "devices": [
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "/dev/loop5"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            ],
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_name": "ceph_lv2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_size": "21470642176",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "name": "ceph_lv2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "tags": {
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.cluster_name": "ceph",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.crush_device_class": "",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.encrypted": "0",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osd_id": "2",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:                "ceph.vdo": "0"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            },
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "type": "block",
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:            "vg_name": "ceph_vg2"
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:        }
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]:    ]
Dec  3 02:28:29 compute-0 affectionate_ptolemy[467045]: }
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.293 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.300 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:28:29 compute-0 systemd[1]: libpod-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope: Deactivated successfully.
Dec  3 02:28:29 compute-0 podman[467029]: 2025-12-03 02:28:29.328498139 +0000 UTC m=+1.160572128 container died a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4eb2f438ad1b8d80ed0715e3488dc22ff31c8e1369f9c00849cea7d5fd7c87c-merged.mount: Deactivated successfully.
Dec  3 02:28:29 compute-0 podman[467029]: 2025-12-03 02:28:29.438073764 +0000 UTC m=+1.270147693 container remove a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 02:28:29 compute-0 systemd[1]: libpod-conmon-a4454b585f51d206170b072a9189b346f7f8cf4810d796a17b43f7bc2c74033e.scope: Deactivated successfully.
Dec  3 02:28:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:28:29 compute-0 podman[158098]: time="2025-12-03T02:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.769 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.770 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3436MB free_disk=59.89701461791992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.770 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.771 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:28:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8676 "" "Go-http-client/1.1"
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.874 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.874 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.875 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.875 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:28:29 compute-0 nova_compute[351485]: 2025-12-03 02:28:29.949 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:28:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:28:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213844537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.402 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.419 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.495032204 +0000 UTC m=+0.089619382 container create 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.546 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.548 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:28:30 compute-0 nova_compute[351485]: 2025-12-03 02:28:30.548 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.46052339 +0000 UTC m=+0.055110588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:30 compute-0 systemd[1]: Started libpod-conmon-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope.
Dec  3 02:28:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.651297568 +0000 UTC m=+0.245884806 container init 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.669341187 +0000 UTC m=+0.263928375 container start 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.67864066 +0000 UTC m=+0.273227908 container attach 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:28:30 compute-0 trusting_wright[467265]: 167 167
Dec  3 02:28:30 compute-0 systemd[1]: libpod-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope: Deactivated successfully.
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.684211787 +0000 UTC m=+0.278798975 container died 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff70ff64e0e922ed52ab748117215237ce5ff1e3d6ea5e1c902e4c634a44f758-merged.mount: Deactivated successfully.
Dec  3 02:28:30 compute-0 podman[467248]: 2025-12-03 02:28:30.765071681 +0000 UTC m=+0.359658839 container remove 1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:28:30 compute-0 systemd[1]: libpod-conmon-1670db05d651a14c09aef209fb74c64f9b18b84a5fd0b6139346363422be37bd.scope: Deactivated successfully.
Dec  3 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.05469083 +0000 UTC m=+0.100338475 container create 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  3 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.020273398 +0000 UTC m=+0.065921043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:28:31 compute-0 systemd[1]: Started libpod-conmon-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope.
Dec  3 02:28:31 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.217004084 +0000 UTC m=+0.262651699 container init 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.236635769 +0000 UTC m=+0.282283384 container start 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:28:31 compute-0 podman[467287]: 2025-12-03 02:28:31.241168697 +0000 UTC m=+0.286816312 container attach 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:28:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:28:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Dec  3 02:28:31 compute-0 nova_compute[351485]: 2025-12-03 02:28:31.698 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]: {
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_id": 2,
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "type": "bluestore"
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    },
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_id": 1,
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "type": "bluestore"
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    },
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_id": 0,
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:        "type": "bluestore"
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]:    }
Dec  3 02:28:32 compute-0 unruffled_agnesi[467302]: }
Dec  3 02:28:32 compute-0 systemd[1]: libpod-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Deactivated successfully.
Dec  3 02:28:32 compute-0 systemd[1]: libpod-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Consumed 1.252s CPU time.
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.548 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.549 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.549 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:28:32 compute-0 podman[467335]: 2025-12-03 02:28:32.57197995 +0000 UTC m=+0.048721877 container died 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-30004a6e4e20f26e7a6c4f166d95e3eb88b8be8b8804302e8f31030b377ac046-merged.mount: Deactivated successfully.
Dec  3 02:28:32 compute-0 podman[467335]: 2025-12-03 02:28:32.694259804 +0000 UTC m=+0.171001681 container remove 717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:28:32 compute-0 systemd[1]: libpod-conmon-717b4b98726c8b55165cac878b5c916ba2fcb06b9b19fe9686d990fcd22fbd66.scope: Deactivated successfully.
Dec  3 02:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:28:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:28:32 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eddbd822-ef5c-4699-9b27-b80182c76f92 does not exist
Dec  3 02:28:32 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 13e22447-67d2-4abe-b939-cc2769d0d612 does not exist
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.981 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.982 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.982 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:28:32 compute-0 nova_compute[351485]: 2025-12-03 02:28:32.983 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:28:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 02:28:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:33 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.102 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.494 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.517 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.517 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.519 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.520 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:34 compute-0 nova_compute[351485]: 2025-12-03 02:28:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  3 02:28:36 compute-0 nova_compute[351485]: 2025-12-03 02:28:36.702 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:36 compute-0 podman[467399]: 2025-12-03 02:28:36.899670552 +0000 UTC m=+0.145655874 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:28:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  3 02:28:37 compute-0 nova_compute[351485]: 2025-12-03 02:28:37.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:37 compute-0 nova_compute[351485]: 2025-12-03 02:28:37.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:38 compute-0 nova_compute[351485]: 2025-12-03 02:28:38.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:38 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015214076846063684 of space, bias 1.0, pg target 0.45642230538191053 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:28:39 compute-0 nova_compute[351485]: 2025-12-03 02:28:39.105 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  3 02:28:39 compute-0 podman[467421]: 2025-12-03 02:28:39.879380674 +0000 UTC m=+0.113238349 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:28:39 compute-0 podman[467420]: 2025-12-03 02:28:39.888209224 +0000 UTC m=+0.125443794 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:28:39 compute-0 podman[467422]: 2025-12-03 02:28:39.895768117 +0000 UTC m=+0.119981870 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 02:28:39 compute-0 podman[467419]: 2025-12-03 02:28:39.909650679 +0000 UTC m=+0.153209518 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  3 02:28:39 compute-0 podman[467424]: 2025-12-03 02:28:39.910785401 +0000 UTC m=+0.131052732 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:28:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:28:41 compute-0 nova_compute[351485]: 2025-12-03 02:28:41.705 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:28:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:44 compute-0 nova_compute[351485]: 2025-12-03 02:28:44.109 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:28:46 compute-0 nova_compute[351485]: 2025-12-03 02:28:46.708 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:28:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:28:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203230717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:28:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:28:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:49 compute-0 nova_compute[351485]: 2025-12-03 02:28:49.112 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  3 02:28:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec  3 02:28:51 compute-0 nova_compute[351485]: 2025-12-03 02:28:51.711 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:28:54 compute-0 nova_compute[351485]: 2025-12-03 02:28:54.118 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:28:56 compute-0 nova_compute[351485]: 2025-12-03 02:28:56.715 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:28:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:28:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:28:58 compute-0 podman[467524]: 2025-12-03 02:28:58.892999323 +0000 UTC m=+0.123795317 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:28:58 compute-0 podman[467523]: 2025-12-03 02:28:58.899895918 +0000 UTC m=+0.134247292 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:28:58 compute-0 podman[467522]: 2025-12-03 02:28:58.910125717 +0000 UTC m=+0.149269977 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  3 02:28:59 compute-0 nova_compute[351485]: 2025-12-03 02:28:59.119 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:28:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.663 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.664 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:28:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:28:59.665 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:28:59 compute-0 podman[158098]: time="2025-12-03T02:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:28:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8669 "" "Go-http-client/1.1"
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:29:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:29:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:29:01 compute-0 nova_compute[351485]: 2025-12-03 02:29:01.718 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:29:04 compute-0 nova_compute[351485]: 2025-12-03 02:29:04.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  3 02:29:06 compute-0 nova_compute[351485]: 2025-12-03 02:29:06.724 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:07 compute-0 podman[467581]: 2025-12-03 02:29:07.876227958 +0000 UTC m=+0.127097998 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 02:29:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:09 compute-0 nova_compute[351485]: 2025-12-03 02:29:09.126 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:10 compute-0 podman[467601]: 2025-12-03 02:29:10.865712301 +0000 UTC m=+0.110111197 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:29:10 compute-0 podman[467600]: 2025-12-03 02:29:10.872092791 +0000 UTC m=+0.114102770 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:29:10 compute-0 podman[467602]: 2025-12-03 02:29:10.879518071 +0000 UTC m=+0.107264677 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public)
Dec  3 02:29:10 compute-0 podman[467599]: 2025-12-03 02:29:10.890099869 +0000 UTC m=+0.142509621 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  3 02:29:10 compute-0 podman[467607]: 2025-12-03 02:29:10.908354404 +0000 UTC m=+0.127830907 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  3 02:29:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:11 compute-0 nova_compute[351485]: 2025-12-03 02:29:11.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:14 compute-0 nova_compute[351485]: 2025-12-03 02:29:14.129 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:16 compute-0 nova_compute[351485]: 2025-12-03 02:29:16.731 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:19 compute-0 nova_compute[351485]: 2025-12-03 02:29:19.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.514 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.515 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.515 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5617980>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.530 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.536 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.536 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.537 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:29:19.537648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.583 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 42.39453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.627 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 41.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.628 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.629 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:29:19.629455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.635 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.643 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.646 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.646 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:29:19.645692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.649 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:29:19.649169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.650 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.652 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:29:19.652122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.654 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.655 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.655 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:29:19.654849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.657 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:29:19.657839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.681 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.682 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.701 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.702 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.704 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:29:19.704804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.761 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.762 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.820 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.821 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.822 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:29:19.822467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.823 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3352022930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 250801539 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.824 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:29:19.824215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.826 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.827 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.827 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:29:19.826434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:29:19.828581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.830 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:29:19.830069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 73138176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.832 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:29:19.832454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.834 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 9097731540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.835 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:29:19.834709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:29:19.837234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:29:19.839376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.840 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 335420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 338610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.842 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.843 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:29:19.840941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:29:19.842382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.845 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.847 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:29:19.843859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:29:19.845297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:29:19.847363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:29:19.848826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:29:19.850008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.851 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:29:19.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:29:20 compute-0 nova_compute[351485]: 2025-12-03 02:29:20.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:21 compute-0 nova_compute[351485]: 2025-12-03 02:29:21.734 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:24 compute-0 nova_compute[351485]: 2025-12-03 02:29:24.135 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.601149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964601193, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1942, "num_deletes": 251, "total_data_size": 3241966, "memory_usage": 3291648, "flush_reason": "Manual Compaction"}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964627695, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3177366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44356, "largest_seqno": 46297, "table_properties": {"data_size": 3168460, "index_size": 5592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17709, "raw_average_key_size": 20, "raw_value_size": 3150849, "raw_average_value_size": 3560, "num_data_blocks": 249, "num_entries": 885, "num_filter_entries": 885, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728748, "oldest_key_time": 1764728748, "file_creation_time": 1764728964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 27185 microseconds, and 14890 cpu microseconds.
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.628322) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3177366 bytes OK
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.629044) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631273) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631292) EVENT_LOG_v1 {"time_micros": 1764728964631285, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.631308) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3233809, prev total WAL file size 3233809, number of live WAL files 2.
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.633092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3102KB)], [107(6330KB)]
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964633150, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9659433, "oldest_snapshot_seqno": -1}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6089 keys, 7919555 bytes, temperature: kUnknown
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964689743, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7919555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7881626, "index_size": 21627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 158542, "raw_average_key_size": 26, "raw_value_size": 7774163, "raw_average_value_size": 1276, "num_data_blocks": 857, "num_entries": 6089, "num_filter_entries": 6089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764728964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.689951) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7919555 bytes
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.691961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.2 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 6603, records dropped: 514 output_compression: NoCompression
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.691980) EVENT_LOG_v1 {"time_micros": 1764728964691971, "job": 64, "event": "compaction_finished", "compaction_time_micros": 56660, "compaction_time_cpu_micros": 26968, "output_level": 6, "num_output_files": 1, "total_output_size": 7919555, "num_input_records": 6603, "num_output_records": 6089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964692770, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764728964694382, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.632594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:24 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:29:24.694771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:29:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:26 compute-0 nova_compute[351485]: 2025-12-03 02:29:26.736 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:29:28
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', 'vms']
Dec  3 02:29:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.139 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:29:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.611 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:29:29 compute-0 nova_compute[351485]: 2025-12-03 02:29:29.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:29:29 compute-0 podman[158098]: time="2025-12-03T02:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:29:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8669 "" "Go-http-client/1.1"
Dec  3 02:29:29 compute-0 podman[467707]: 2025-12-03 02:29:29.841344902 +0000 UTC m=+0.096263358 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:29:29 compute-0 podman[467708]: 2025-12-03 02:29:29.864433823 +0000 UTC m=+0.106736943 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  3 02:29:29 compute-0 podman[467709]: 2025-12-03 02:29:29.901841139 +0000 UTC m=+0.125785691 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:29:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:29:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774826908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.119 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.240 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.240 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.250 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.819 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.821 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.821 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.822 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.931 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.932 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.932 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:29:30 compute-0 nova_compute[351485]: 2025-12-03 02:29:30.933 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.011 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:29:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:29:31 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:29:31 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998550311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.473 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.488 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:29:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.518 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.522 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.523 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:29:31 compute-0 nova_compute[351485]: 2025-12-03 02:29:31.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.141 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.523 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:34 compute-0 nova_compute[351485]: 2025-12-03 02:29:34.524 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b37dd645-7f74-4429-a7e1-8f591d4bfd7c does not exist
Dec  3 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 741949d4-965f-4d6e-824b-a4f615b23562 does not exist
Dec  3 02:29:34 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a1eeb915-513b-4756-b264-54071b5f0556 does not exist
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:29:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:34 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:29:35 compute-0 nova_compute[351485]: 2025-12-03 02:29:35.021 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:29:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:35 compute-0 podman[468075]: 2025-12-03 02:29:35.845664352 +0000 UTC m=+0.093956413 container create a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:29:35 compute-0 podman[468075]: 2025-12-03 02:29:35.815274614 +0000 UTC m=+0.063566685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:35 compute-0 systemd[1]: Started libpod-conmon-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope.
Dec  3 02:29:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.040935381 +0000 UTC m=+0.289227502 container init a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.059468694 +0000 UTC m=+0.307760755 container start a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.067140851 +0000 UTC m=+0.315432972 container attach a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:29:36 compute-0 compassionate_meitner[468091]: 167 167
Dec  3 02:29:36 compute-0 systemd[1]: libpod-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope: Deactivated successfully.
Dec  3 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.075220539 +0000 UTC m=+0.323512620 container died a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7eed382ad3b93035c50df1047118ef6062a15c548ea6388e0eeb69de7073c17-merged.mount: Deactivated successfully.
Dec  3 02:29:36 compute-0 podman[468075]: 2025-12-03 02:29:36.171963339 +0000 UTC m=+0.420255410 container remove a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:29:36 compute-0 systemd[1]: libpod-conmon-a7638ebe87e93edfd47f3a8a4f953d0d7e6ffd328e71f1abe594ced5b751bfee.scope: Deactivated successfully.
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.323 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.341 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.341 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.342 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.342 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.343 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.446828215 +0000 UTC m=+0.076420297 container create 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.420482352 +0000 UTC m=+0.050074424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:36 compute-0 systemd[1]: Started libpod-conmon-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope.
Dec  3 02:29:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.599233596 +0000 UTC m=+0.228825738 container init 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.627287138 +0000 UTC m=+0.256879200 container start 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:29:36 compute-0 podman[468114]: 2025-12-03 02:29:36.632133105 +0000 UTC m=+0.261725167 container attach 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:29:36 compute-0 nova_compute[351485]: 2025-12-03 02:29:36.745 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:37 compute-0 eloquent_nobel[468129]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:29:37 compute-0 eloquent_nobel[468129]: --> relative data size: 1.0
Dec  3 02:29:37 compute-0 eloquent_nobel[468129]: --> All data devices are unavailable
Dec  3 02:29:37 compute-0 systemd[1]: libpod-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Deactivated successfully.
Dec  3 02:29:37 compute-0 podman[468114]: 2025-12-03 02:29:37.787995163 +0000 UTC m=+1.417587315 container died 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:29:37 compute-0 systemd[1]: libpod-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Consumed 1.106s CPU time.
Dec  3 02:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc266e56bd6ab7c84f6e93e53c0d81ceed2beb527c50f16d70443f9b03e19eaf-merged.mount: Deactivated successfully.
Dec  3 02:29:37 compute-0 podman[468114]: 2025-12-03 02:29:37.890041163 +0000 UTC m=+1.519633225 container remove 0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nobel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:29:37 compute-0 systemd[1]: libpod-conmon-0fd3f06407a2039f77ed31137123ec3bc2e9f406a8f1d1268c50b84d7370c17e.scope: Deactivated successfully.
Dec  3 02:29:38 compute-0 podman[468170]: 2025-12-03 02:29:38.079498029 +0000 UTC m=+0.132310475 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:29:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.078524092 +0000 UTC m=+0.081543872 container create be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.048472694 +0000 UTC m=+0.051492484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.145 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:39 compute-0 systemd[1]: Started libpod-conmon-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope.
Dec  3 02:29:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.236236842 +0000 UTC m=+0.239256612 container init be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.256200036 +0000 UTC m=+0.259219806 container start be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.265620712 +0000 UTC m=+0.268640542 container attach be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:29:39 compute-0 determined_mccarthy[468347]: 167 167
Dec  3 02:29:39 compute-0 systemd[1]: libpod-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope: Deactivated successfully.
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.271664782 +0000 UTC m=+0.274684612 container died be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 02:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c93609aeb7020dab022d3a5be9bc25d458af7f25d2b7a8b14759cede45820849-merged.mount: Deactivated successfully.
Dec  3 02:29:39 compute-0 podman[468331]: 2025-12-03 02:29:39.345037953 +0000 UTC m=+0.348057703 container remove be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:29:39 compute-0 systemd[1]: libpod-conmon-be0fbd273874b8052f9c2aa11de0c2d1e87e91bf3b727c6520744a52281bca18.scope: Deactivated successfully.
Dec  3 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.390 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:39 compute-0 nova_compute[351485]: 2025-12-03 02:29:39.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.655168474 +0000 UTC m=+0.097510223 container create e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.616664007 +0000 UTC m=+0.059005806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:39 compute-0 systemd[1]: Started libpod-conmon-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope.
Dec  3 02:29:39 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.864298725 +0000 UTC m=+0.306640504 container init e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.901966188 +0000 UTC m=+0.344307947 container start e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:29:39 compute-0 podman[468371]: 2025-12-03 02:29:39.908997577 +0000 UTC m=+0.351339326 container attach e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]: {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    "0": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "devices": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "/dev/loop3"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            ],
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_name": "ceph_lv0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_size": "21470642176",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "name": "ceph_lv0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "tags": {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_name": "ceph",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.crush_device_class": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.encrypted": "0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_id": "0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.vdo": "0"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            },
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "vg_name": "ceph_vg0"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        }
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    ],
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    "1": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "devices": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "/dev/loop4"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            ],
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_name": "ceph_lv1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_size": "21470642176",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "name": "ceph_lv1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "tags": {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_name": "ceph",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.crush_device_class": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.encrypted": "0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_id": "1",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.vdo": "0"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            },
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "vg_name": "ceph_vg1"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        }
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    ],
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    "2": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "devices": [
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "/dev/loop5"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            ],
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_name": "ceph_lv2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_size": "21470642176",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "name": "ceph_lv2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "tags": {
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.cluster_name": "ceph",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.crush_device_class": "",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.encrypted": "0",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osd_id": "2",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:                "ceph.vdo": "0"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            },
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "type": "block",
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:            "vg_name": "ceph_vg2"
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:        }
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]:    ]
Dec  3 02:29:40 compute-0 inspiring_kalam[468387]: }
Dec  3 02:29:40 compute-0 systemd[1]: libpod-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope: Deactivated successfully.
Dec  3 02:29:40 compute-0 podman[468371]: 2025-12-03 02:29:40.790617386 +0000 UTC m=+1.232959145 container died e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-387a14436f8d2266f56a6157d03d4a15642e8abedc74eab0dd64d2d9b870ce9a-merged.mount: Deactivated successfully.
Dec  3 02:29:40 compute-0 podman[468371]: 2025-12-03 02:29:40.888034745 +0000 UTC m=+1.330376504 container remove e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:29:40 compute-0 systemd[1]: libpod-conmon-e6c67e510a354cfee868a146f4f6ce8adf65ee7c05ffa690d1896c359ae64634.scope: Deactivated successfully.
Dec  3 02:29:41 compute-0 podman[468412]: 2025-12-03 02:29:41.066864772 +0000 UTC m=+0.104153791 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 02:29:41 compute-0 podman[468411]: 2025-12-03 02:29:41.073586031 +0000 UTC m=+0.114550873 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:29:41 compute-0 podman[468413]: 2025-12-03 02:29:41.082295637 +0000 UTC m=+0.114552764 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:29:41 compute-0 podman[468410]: 2025-12-03 02:29:41.10897479 +0000 UTC m=+0.149310415 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Dec  3 02:29:41 compute-0 podman[468409]: 2025-12-03 02:29:41.119407874 +0000 UTC m=+0.161660183 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 02:29:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:29:41 compute-0 nova_compute[351485]: 2025-12-03 02:29:41.749 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:41 compute-0 podman[468651]: 2025-12-03 02:29:41.895621949 +0000 UTC m=+0.101835125 container create 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:29:41 compute-0 podman[468651]: 2025-12-03 02:29:41.852806241 +0000 UTC m=+0.059019457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:41 compute-0 systemd[1]: Started libpod-conmon-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope.
Dec  3 02:29:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.053811743 +0000 UTC m=+0.260024969 container init 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.070817593 +0000 UTC m=+0.277030759 container start 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.077815921 +0000 UTC m=+0.284029097 container attach 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:29:42 compute-0 sleepy_clarke[468667]: 167 167
Dec  3 02:29:42 compute-0 systemd[1]: libpod-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope: Deactivated successfully.
Dec  3 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.084848419 +0000 UTC m=+0.291061595 container died 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecbe0678e71fa0f774b2b1b61b85cd3f5e0fa4a2066ee1a76bc087399e41963d-merged.mount: Deactivated successfully.
Dec  3 02:29:42 compute-0 podman[468651]: 2025-12-03 02:29:42.182059202 +0000 UTC m=+0.388272368 container remove 1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:29:42 compute-0 systemd[1]: libpod-conmon-1b02c2f29580b671d15b06b1a465ea9496e9e80398322b017a51a006f2ec2250.scope: Deactivated successfully.
Dec  3 02:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:29:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2810 syncs, 3.67 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 738 writes, 2541 keys, 738 commit groups, 1.0 writes per commit group, ingest: 3.34 MB, 0.01 MB/s#012Interval WAL: 738 writes, 303 syncs, 2.44 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.507839576 +0000 UTC m=+0.113054892 container create c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.466730196 +0000 UTC m=+0.071945562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:29:42 compute-0 systemd[1]: Started libpod-conmon-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope.
Dec  3 02:29:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.675650371 +0000 UTC m=+0.280865657 container init c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.692653461 +0000 UTC m=+0.297868777 container start c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:29:42 compute-0 podman[468688]: 2025-12-03 02:29:42.699445673 +0000 UTC m=+0.304660959 container attach c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 02:29:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]: {
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_id": 2,
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "type": "bluestore"
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    },
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_id": 1,
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "type": "bluestore"
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    },
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_id": 0,
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:        "type": "bluestore"
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]:    }
Dec  3 02:29:43 compute-0 cool_ptolemy[468703]: }
Dec  3 02:29:43 compute-0 systemd[1]: libpod-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Deactivated successfully.
Dec  3 02:29:43 compute-0 podman[468688]: 2025-12-03 02:29:43.912156645 +0000 UTC m=+1.517371941 container died c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:29:43 compute-0 systemd[1]: libpod-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Consumed 1.214s CPU time.
Dec  3 02:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b647a81ee77db4b3c8e915068c2d00e15a2403025d6e7bfe9e9ac8099cd60035-merged.mount: Deactivated successfully.
Dec  3 02:29:43 compute-0 podman[468688]: 2025-12-03 02:29:43.988857249 +0000 UTC m=+1.594072545 container remove c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:29:44 compute-0 systemd[1]: libpod-conmon-c02e36bcb15f01cb6d1867c15727bb44f665c34f802a54cfc67c8a6fd9f64bc2.scope: Deactivated successfully.
Dec  3 02:29:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:29:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:29:44 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f5ff9db9-92c7-4702-9550-def9cc191582 does not exist
Dec  3 02:29:44 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev af21abab-3b1d-499e-a80a-4c8452597db0 does not exist
Dec  3 02:29:44 compute-0 nova_compute[351485]: 2025-12-03 02:29:44.148 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:44 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:29:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:46 compute-0 nova_compute[351485]: 2025-12-03 02:29:46.753 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:29:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:29:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:29:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4103974287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:29:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:29:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 3272 syncs, 3.60 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 751 writes, 2590 keys, 751 commit groups, 1.0 writes per commit group, ingest: 3.26 MB, 0.01 MB/s#012Interval WAL: 751 writes, 299 syncs, 2.51 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:29:49 compute-0 nova_compute[351485]: 2025-12-03 02:29:49.152 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:51 compute-0 nova_compute[351485]: 2025-12-03 02:29:51.756 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:54 compute-0 nova_compute[351485]: 2025-12-03 02:29:54.155 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:29:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:29:56 compute-0 nova_compute[351485]: 2025-12-03 02:29:56.759 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 02:29:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:29:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:29:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:29:59 compute-0 nova_compute[351485]: 2025-12-03 02:29:59.158 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:29:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.664 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.665 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:29:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:29:59.666 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:29:59 compute-0 podman[158098]: time="2025-12-03T02:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:29:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  3 02:30:00 compute-0 podman[468800]: 2025-12-03 02:30:00.881435529 +0000 UTC m=+0.114163573 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:30:00 compute-0 podman[468799]: 2025-12-03 02:30:00.88251712 +0000 UTC m=+0.118119895 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:30:00 compute-0 podman[468798]: 2025-12-03 02:30:00.906430424 +0000 UTC m=+0.145736793 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:30:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:30:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:01 compute-0 nova_compute[351485]: 2025-12-03 02:30:01.762 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:04 compute-0 nova_compute[351485]: 2025-12-03 02:30:04.161 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:06 compute-0 nova_compute[351485]: 2025-12-03 02:30:06.767 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:08 compute-0 podman[468858]: 2025-12-03 02:30:08.869042579 +0000 UTC m=+0.116623422 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:30:09 compute-0 nova_compute[351485]: 2025-12-03 02:30:09.164 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:11 compute-0 nova_compute[351485]: 2025-12-03 02:30:11.770 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:11 compute-0 podman[468877]: 2025-12-03 02:30:11.860368891 +0000 UTC m=+0.094467456 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  3 02:30:11 compute-0 podman[468878]: 2025-12-03 02:30:11.878936585 +0000 UTC m=+0.105550939 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  3 02:30:11 compute-0 podman[468879]: 2025-12-03 02:30:11.906368249 +0000 UTC m=+0.125476011 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30)
Dec  3 02:30:11 compute-0 podman[468876]: 2025-12-03 02:30:11.910851846 +0000 UTC m=+0.153549084 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:30:11 compute-0 podman[468885]: 2025-12-03 02:30:11.912644897 +0000 UTC m=+0.123329612 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:14 compute-0 nova_compute[351485]: 2025-12-03 02:30:14.167 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:16 compute-0 nova_compute[351485]: 2025-12-03 02:30:16.774 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:19 compute-0 nova_compute[351485]: 2025-12-03 02:30:19.170 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:20 compute-0 nova_compute[351485]: 2025-12-03 02:30:20.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:21 compute-0 nova_compute[351485]: 2025-12-03 02:30:21.777 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:24 compute-0 nova_compute[351485]: 2025-12-03 02:30:24.175 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:26 compute-0 nova_compute[351485]: 2025-12-03 02:30:26.781 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:30:28
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms']
Dec  3 02:30:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:30:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:29 compute-0 nova_compute[351485]: 2025-12-03 02:30:29.178 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:30:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:29 compute-0 nova_compute[351485]: 2025-12-03 02:30:29.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:29 compute-0 podman[158098]: time="2025-12-03T02:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:30:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8665 "" "Go-http-client/1.1"
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.085 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.086 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.086 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.087 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.088 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:30:30 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:30:30 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597456042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.622 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.725 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.725 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.732 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:30:30 compute-0 nova_compute[351485]: 2025-12-03 02:30:30.733 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.367 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.368 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:30:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.463 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.464 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.478 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.497 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.498 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.510 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.528 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:30:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.583 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:30:31 compute-0 nova_compute[351485]: 2025-12-03 02:30:31.786 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:31 compute-0 podman[469000]: 2025-12-03 02:30:31.86594991 +0000 UTC m=+0.099149219 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:30:31 compute-0 podman[468998]: 2025-12-03 02:30:31.869214382 +0000 UTC m=+0.108421790 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:30:31 compute-0 podman[468999]: 2025-12-03 02:30:31.878173635 +0000 UTC m=+0.124033361 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:30:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:30:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618414721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.232 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.257 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.258 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:30:32 compute-0 nova_compute[351485]: 2025-12-03 02:30:32.259 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:30:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:34 compute-0 nova_compute[351485]: 2025-12-03 02:30:34.182 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:36 compute-0 nova_compute[351485]: 2025-12-03 02:30:36.791 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.258 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.259 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:30:37 compute-0 nova_compute[351485]: 2025-12-03 02:30:37.260 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:30:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.004 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:30:38 compute-0 nova_compute[351485]: 2025-12-03 02:30:38.005 351492 DEBUG nova.objects.instance [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:30:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.185 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.589 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [{"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-2890ee5c-21c1-4e9d-9421-1a2df0f67f76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.607 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.608 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.610 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:39 compute-0 nova_compute[351485]: 2025-12-03 02:30:39.611 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:39 compute-0 podman[469074]: 2025-12-03 02:30:39.869380084 +0000 UTC m=+0.122582950 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:30:40 compute-0 nova_compute[351485]: 2025-12-03 02:30:40.923 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:40 compute-0 nova_compute[351485]: 2025-12-03 02:30:40.925 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:41 compute-0 nova_compute[351485]: 2025-12-03 02:30:41.794 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:42 compute-0 podman[469095]: 2025-12-03 02:30:42.871654604 +0000 UTC m=+0.095512056 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:30:42 compute-0 podman[469094]: 2025-12-03 02:30:42.871558642 +0000 UTC m=+0.103513233 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec  3 02:30:42 compute-0 podman[469102]: 2025-12-03 02:30:42.893726937 +0000 UTC m=+0.104681995 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:42 compute-0 podman[469096]: 2025-12-03 02:30:42.924894567 +0000 UTC m=+0.142598415 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:30:42 compute-0 podman[469093]: 2025-12-03 02:30:42.92749828 +0000 UTC m=+0.166173600 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:30:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:43 compute-0 nova_compute[351485]: 2025-12-03 02:30:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:30:43 compute-0 nova_compute[351485]: 2025-12-03 02:30:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:30:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:44 compute-0 nova_compute[351485]: 2025-12-03 02:30:44.189 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d39ec02b-dd99-4c3d-b332-cdfa66d71aed does not exist
Dec  3 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31747156-abaf-4f70-872b-e94e937d8f37 does not exist
Dec  3 02:30:45 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 7034b79f-9a97-406e-b0c1-c60d0dd70949 does not exist
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:30:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:45 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:30:46 compute-0 nova_compute[351485]: 2025-12-03 02:30:46.797 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:46 compute-0 podman[469463]: 2025-12-03 02:30:46.957911936 +0000 UTC m=+0.082891110 container create 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:46.921851029 +0000 UTC m=+0.046830273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:47 compute-0 systemd[1]: Started libpod-conmon-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope.
Dec  3 02:30:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.094724447 +0000 UTC m=+0.219703621 container init 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.113017263 +0000 UTC m=+0.237996447 container start 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.119877917 +0000 UTC m=+0.244857071 container attach 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:30:47 compute-0 beautiful_nightingale[469477]: 167 167
Dec  3 02:30:47 compute-0 systemd[1]: libpod-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope: Deactivated successfully.
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.127256045 +0000 UTC m=+0.252235279 container died 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d92719bd5b18741c4ebccf17cbba998f02afda01b902580d048c6e9f812e91-merged.mount: Deactivated successfully.
Dec  3 02:30:47 compute-0 podman[469463]: 2025-12-03 02:30:47.202821258 +0000 UTC m=+0.327800412 container remove 8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 02:30:47 compute-0 systemd[1]: libpod-conmon-8271c5124cbca86a247240f492492b3793c7485b54d91b51ed82178b8219834f.scope: Deactivated successfully.
Dec  3 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.492150813 +0000 UTC m=+0.094096577 container create bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.454927252 +0000 UTC m=+0.056873086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:47 compute-0 systemd[1]: Started libpod-conmon-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope.
Dec  3 02:30:47 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.675910187 +0000 UTC m=+0.277855971 container init bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.693101682 +0000 UTC m=+0.295047456 container start bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:30:47 compute-0 podman[469503]: 2025-12-03 02:30:47.70011056 +0000 UTC m=+0.302056304 container attach bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:30:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:49 compute-0 blissful_satoshi[469517]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:30:49 compute-0 blissful_satoshi[469517]: --> relative data size: 1.0
Dec  3 02:30:49 compute-0 blissful_satoshi[469517]: --> All data devices are unavailable
Dec  3 02:30:49 compute-0 systemd[1]: libpod-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Deactivated successfully.
Dec  3 02:30:49 compute-0 podman[469503]: 2025-12-03 02:30:49.091457544 +0000 UTC m=+1.693403298 container died bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:30:49 compute-0 systemd[1]: libpod-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Consumed 1.332s CPU time.
Dec  3 02:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9929b07bb6e30f17a0eff9f1b18e69878bc2e6b0265e0a1173a7889f4495d635-merged.mount: Deactivated successfully.
Dec  3 02:30:49 compute-0 nova_compute[351485]: 2025-12-03 02:30:49.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:49 compute-0 podman[469503]: 2025-12-03 02:30:49.206900262 +0000 UTC m=+1.808846016 container remove bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:30:49 compute-0 systemd[1]: libpod-conmon-bbb8f2dc773a1b3b77106bd1f0fcb1ec2b519d0535ec369feee3069e7c7a97a9.scope: Deactivated successfully.
Dec  3 02:30:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.476679655 +0000 UTC m=+0.099409236 container create 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.436882802 +0000 UTC m=+0.059612453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:50 compute-0 systemd[1]: Started libpod-conmon-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope.
Dec  3 02:30:50 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.60972783 +0000 UTC m=+0.232457381 container init 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.626435031 +0000 UTC m=+0.249164612 container start 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.634040136 +0000 UTC m=+0.256769737 container attach 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:30:50 compute-0 recursing_bhabha[469712]: 167 167
Dec  3 02:30:50 compute-0 systemd[1]: libpod-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope: Deactivated successfully.
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.642088453 +0000 UTC m=+0.264818024 container died 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 02:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0be077ea56e511227f10cad255d9bd78b54229ba85dbb8d06b8683610c0101-merged.mount: Deactivated successfully.
Dec  3 02:30:50 compute-0 podman[469696]: 2025-12-03 02:30:50.715971838 +0000 UTC m=+0.338701379 container remove 69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 02:30:50 compute-0 systemd[1]: libpod-conmon-69468f3ff64d53b74a1cec774afd2625e93dbef01653301a5997ddaf74b0203b.scope: Deactivated successfully.
Dec  3 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.012875616 +0000 UTC m=+0.104719096 container create 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:50.969288666 +0000 UTC m=+0.061132186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:51 compute-0 systemd[1]: Started libpod-conmon-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope.
Dec  3 02:30:51 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.238248905 +0000 UTC m=+0.330092445 container init 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.273680885 +0000 UTC m=+0.365524355 container start 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:51 compute-0 podman[469734]: 2025-12-03 02:30:51.281319681 +0000 UTC m=+0.373163221 container attach 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 02:30:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:51 compute-0 nova_compute[351485]: 2025-12-03 02:30:51.801 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:52 compute-0 gracious_leakey[469750]: {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    "0": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "devices": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "/dev/loop3"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            ],
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_name": "ceph_lv0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_size": "21470642176",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "name": "ceph_lv0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "tags": {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_name": "ceph",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.crush_device_class": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.encrypted": "0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_id": "0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.vdo": "0"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            },
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "vg_name": "ceph_vg0"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        }
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    ],
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    "1": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "devices": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "/dev/loop4"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            ],
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_name": "ceph_lv1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_size": "21470642176",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "name": "ceph_lv1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "tags": {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_name": "ceph",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.crush_device_class": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.encrypted": "0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_id": "1",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.vdo": "0"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            },
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "vg_name": "ceph_vg1"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        }
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    ],
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    "2": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "devices": [
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "/dev/loop5"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            ],
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_name": "ceph_lv2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_size": "21470642176",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "name": "ceph_lv2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "tags": {
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.cluster_name": "ceph",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.crush_device_class": "",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.encrypted": "0",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osd_id": "2",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:                "ceph.vdo": "0"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            },
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "type": "block",
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:            "vg_name": "ceph_vg2"
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:        }
Dec  3 02:30:52 compute-0 gracious_leakey[469750]:    ]
Dec  3 02:30:52 compute-0 gracious_leakey[469750]: }
Dec  3 02:30:52 compute-0 systemd[1]: libpod-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope: Deactivated successfully.
Dec  3 02:30:52 compute-0 podman[469759]: 2025-12-03 02:30:52.200688296 +0000 UTC m=+0.051722861 container died 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-59cacf352a7df3aed24fa29855a745e8ce937ce83eabed3037ca808d9c089e8d-merged.mount: Deactivated successfully.
Dec  3 02:30:52 compute-0 podman[469759]: 2025-12-03 02:30:52.324900111 +0000 UTC m=+0.175934656 container remove 16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_leakey, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 02:30:52 compute-0 systemd[1]: libpod-conmon-16826113ae1e4ec2cb29fd616497b9b7a25fb3b806436e091ff1f6c990c7ed8f.scope: Deactivated successfully.
Dec  3 02:30:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.620007399 +0000 UTC m=+0.089415815 container create f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.590005462 +0000 UTC m=+0.059413878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:53 compute-0 systemd[1]: Started libpod-conmon-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope.
Dec  3 02:30:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.723707495 +0000 UTC m=+0.193115961 container init f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.732173854 +0000 UTC m=+0.201582230 container start f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:30:53 compute-0 strange_hypatia[469923]: 167 167
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.736727332 +0000 UTC m=+0.206135798 container attach f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:30:53 compute-0 systemd[1]: libpod-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope: Deactivated successfully.
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.739240203 +0000 UTC m=+0.208648599 container died f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e5f4c6474010f34d79cfa96859b2a720b490bcf20d007cad1690a2963672fe4-merged.mount: Deactivated successfully.
Dec  3 02:30:53 compute-0 podman[469910]: 2025-12-03 02:30:53.775497276 +0000 UTC m=+0.244905652 container remove f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  3 02:30:53 compute-0 systemd[1]: libpod-conmon-f9a13c6062315e81f4d4762ff811b81c57a56b65a0146df036eda19c09fc76b2.scope: Deactivated successfully.
Dec  3 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.0226054 +0000 UTC m=+0.070561262 container create 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:30:54 compute-0 systemd[1]: Started libpod-conmon-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope.
Dec  3 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.002111422 +0000 UTC m=+0.050067264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:30:54 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.177363417 +0000 UTC m=+0.225319269 container init 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  3 02:30:54 compute-0 nova_compute[351485]: 2025-12-03 02:30:54.195 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.204860763 +0000 UTC m=+0.252816605 container start 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 02:30:54 compute-0 podman[469949]: 2025-12-03 02:30:54.211247223 +0000 UTC m=+0.259203145 container attach 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  3 02:30:55 compute-0 stoic_nobel[469966]: {
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_id": 2,
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "type": "bluestore"
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    },
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_id": 1,
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "type": "bluestore"
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    },
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_id": 0,
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:        "type": "bluestore"
Dec  3 02:30:55 compute-0 stoic_nobel[469966]:    }
Dec  3 02:30:55 compute-0 stoic_nobel[469966]: }
Dec  3 02:30:55 compute-0 systemd[1]: libpod-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Deactivated successfully.
Dec  3 02:30:55 compute-0 systemd[1]: libpod-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Consumed 1.201s CPU time.
Dec  3 02:30:55 compute-0 podman[469949]: 2025-12-03 02:30:55.407729957 +0000 UTC m=+1.455685889 container died 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-670d577cf27dcb880013d4af82d4abb76b6d311fd77f127ce07ce9f76d4cfd41-merged.mount: Deactivated successfully.
Dec  3 02:30:55 compute-0 podman[469949]: 2025-12-03 02:30:55.508001167 +0000 UTC m=+1.555957009 container remove 8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nobel, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 02:30:55 compute-0 systemd[1]: libpod-conmon-8dbf491db27b4bcb5bccb1f31dcd8f40230ce45146cb77adcf84573ee83310f0.scope: Deactivated successfully.
Dec  3 02:30:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:30:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:30:55 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 8e322b93-3c41-470f-b258-fa87838e6e37 does not exist
Dec  3 02:30:55 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 46f85ef2-abac-45a6-aacb-62c78f17e54c does not exist
Dec  3 02:30:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:56 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:30:56 compute-0 nova_compute[351485]: 2025-12-03 02:30:56.806 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:30:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:30:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:30:59 compute-0 nova_compute[351485]: 2025-12-03 02:30:59.198 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:30:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.666 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.667 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:30:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:30:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:30:59 compute-0 podman[158098]: time="2025-12-03T02:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:30:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8675 "" "Go-http-client/1.1"
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:31:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:31:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:01 compute-0 nova_compute[351485]: 2025-12-03 02:31:01.809 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:02 compute-0 podman[470062]: 2025-12-03 02:31:02.851898128 +0000 UTC m=+0.100255140 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 02:31:02 compute-0 podman[470063]: 2025-12-03 02:31:02.924514227 +0000 UTC m=+0.171111919 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:31:02 compute-0 podman[470064]: 2025-12-03 02:31:02.9377155 +0000 UTC m=+0.172212311 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:31:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:04 compute-0 nova_compute[351485]: 2025-12-03 02:31:04.200 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:06 compute-0 nova_compute[351485]: 2025-12-03 02:31:06.813 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:09 compute-0 nova_compute[351485]: 2025-12-03 02:31:09.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:10 compute-0 podman[470121]: 2025-12-03 02:31:10.880122548 +0000 UTC m=+0.136245335 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Dec  3 02:31:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:11 compute-0 nova_compute[351485]: 2025-12-03 02:31:11.817 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:13 compute-0 podman[470143]: 2025-12-03 02:31:13.877697026 +0000 UTC m=+0.105181740 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:31:13 compute-0 podman[470142]: 2025-12-03 02:31:13.888442099 +0000 UTC m=+0.122320963 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Dec  3 02:31:13 compute-0 podman[470150]: 2025-12-03 02:31:13.89841086 +0000 UTC m=+0.109117070 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:31:13 compute-0 podman[470141]: 2025-12-03 02:31:13.910979055 +0000 UTC m=+0.152517135 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  3 02:31:13 compute-0 podman[470144]: 2025-12-03 02:31:13.924470816 +0000 UTC m=+0.145293292 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec  3 02:31:14 compute-0 nova_compute[351485]: 2025-12-03 02:31:14.207 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:16 compute-0 nova_compute[351485]: 2025-12-03 02:31:16.821 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:19 compute-0 nova_compute[351485]: 2025-12-03 02:31:19.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.515 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.516 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.516 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e56177d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.530 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'name': 'te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.536 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'name': 'te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr', 'flavor': {'id': '89219634-32e9-4cb5-896f-6fa0b1edfe13', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '8876482c-db67-48c0-9203-60685152fc9d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '63f39ac2863946b8b817457e689ff933', 'user_id': '8f61f44789494541b7c101b0fdab52f0', 'hostId': 'b9b5204cb6f419d1971089b3610cd52175ffd5baf1b6a5204f14f9c2', 'status': 'active', 'metadata': {'metering.server_group': '38bfb145-4971-41b6-9bc3-faf3c3931019'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.537 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-03T02:31:19.538161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.596 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/memory.usage volume: 42.40625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.647 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/memory.usage volume: 42.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-03T02:31:19.648994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.654 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.659 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-03T02:31:19.660900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.661 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-03T02:31:19.663301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.664 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.666 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-03T02:31:19.666073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.668 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.669 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-03T02:31:19.668654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.671 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-03T02:31:19.671218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.692 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.710 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.711 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-03T02:31:19.714489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.778 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 31074816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.779 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.848 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 31267328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.849 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.850 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.851 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-03T02:31:19.851336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.852 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-03T02:31:19.854717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.855 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 3352022930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.856 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.latency volume: 250801539 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.856 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 2988151233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.857 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.latency volume: 215162747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.858 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.859 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-03T02:31:19.859319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.860 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 1144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.861 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.862 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-03T02:31:19.863360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.864 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.866 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-03T02:31:19.866096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 73138176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.870 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-03T02:31:19.870000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.871 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.872 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.874 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 9097731540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.875 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.876 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 10465171027 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.877 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.878 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.879 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.880 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.881 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.881 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-03T02:31:19.874418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-03T02:31:19.879740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-03T02:31:19.884059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.884 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.885 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/cpu volume: 337520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-03T02:31:19.887194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.888 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/cpu volume: 340540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-03T02:31:19.890607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.892 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.893 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-03T02:31:19.892667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.895 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-03T02:31:19.895314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.896 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.900 14 DEBUG ceilometer.compute.pollsters [-] 4fb8fc07-d7b7-4be8-94da-155b040faf32/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.compute.pollsters [-] 2890ee5c-21c1-4e9d-9421-1a2df0f67f76/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.901 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-03T02:31:19.898152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-03T02:31:19.899428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-03T02:31:19.900674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.903 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:31:19.904 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:31:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 02:31:21 compute-0 nova_compute[351485]: 2025-12-03 02:31:21.824 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:31:22 compute-0 nova_compute[351485]: 2025-12-03 02:31:22.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:31:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 02:31:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:24 compute-0 nova_compute[351485]: 2025-12-03 02:31:24.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:31:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec  3 02:31:26 compute-0 nova_compute[351485]: 2025-12-03 02:31:26.827 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:31:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:31:28
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec  3 02:31:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:31:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:29 compute-0 nova_compute[351485]: 2025-12-03 02:31:29.216 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:31:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:31:29 compute-0 podman[158098]: time="2025-12-03T02:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43811 "" "Go-http-client/1.1"
Dec  3 02:31:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8679 "" "Go-http-client/1.1"
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:31:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.623 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:31:31 compute-0 nova_compute[351485]: 2025-12-03 02:31:31.831 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:31:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489720266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.168 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.287 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.288 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.298 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.299 351492 DEBUG nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.776 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.777 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3441MB free_disk=59.897010803222656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.777 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.778 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.904 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.905 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Instance 4fb8fc07-d7b7-4be8-94da-155b040faf32 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.905 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.906 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:31:32 compute-0 nova_compute[351485]: 2025-12-03 02:31:32.959 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:31:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:31:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522631771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.462 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.477 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.499 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.503 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:31:33 compute-0 nova_compute[351485]: 2025-12-03 02:31:33.504 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:31:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec  3 02:31:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:33 compute-0 podman[470294]: 2025-12-03 02:31:33.869601135 +0000 UTC m=+0.100405854 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:31:33 compute-0 podman[470292]: 2025-12-03 02:31:33.877446277 +0000 UTC m=+0.122364234 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:31:33 compute-0 podman[470293]: 2025-12-03 02:31:33.880173024 +0000 UTC m=+0.129041283 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Dec  3 02:31:34 compute-0 nova_compute[351485]: 2025-12-03 02:31:34.218 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.506 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.508 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.835 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.947 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.948 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquired lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 02:31:36 compute-0 nova_compute[351485]: 2025-12-03 02:31:36.949 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  3 02:31:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.286 351492 DEBUG nova.network.neutron [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [{"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.310 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Releasing lock "refresh_cache-4fb8fc07-d7b7-4be8-94da-155b040faf32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.311 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.312 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.313 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:38 compute-0 nova_compute[351485]: 2025-12-03 02:31:38.314 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001521471275314189 of space, bias 1.0, pg target 0.45644138259425665 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:31:39 compute-0 nova_compute[351485]: 2025-12-03 02:31:39.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec  3 02:31:40 compute-0 nova_compute[351485]: 2025-12-03 02:31:40.378 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:41 compute-0 nova_compute[351485]: 2025-12-03 02:31:41.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:41 compute-0 nova_compute[351485]: 2025-12-03 02:31:41.839 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:41 compute-0 podman[470352]: 2025-12-03 02:31:41.880925477 +0000 UTC m=+0.135562786 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:31:43 compute-0 nova_compute[351485]: 2025-12-03 02:31:43.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:31:43 compute-0 nova_compute[351485]: 2025-12-03 02:31:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:31:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:44 compute-0 nova_compute[351485]: 2025-12-03 02:31:44.225 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:44 compute-0 podman[470370]: 2025-12-03 02:31:44.844262009 +0000 UTC m=+0.120795799 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec  3 02:31:44 compute-0 podman[470379]: 2025-12-03 02:31:44.861269969 +0000 UTC m=+0.117191977 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:31:44 compute-0 podman[470372]: 2025-12-03 02:31:44.872169106 +0000 UTC m=+0.112780422 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  3 02:31:44 compute-0 podman[470371]: 2025-12-03 02:31:44.874986406 +0000 UTC m=+0.144660313 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:31:44 compute-0 podman[470369]: 2025-12-03 02:31:44.889880446 +0000 UTC m=+0.173156376 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  3 02:31:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:46 compute-0 nova_compute[351485]: 2025-12-03 02:31:46.841 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:31:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:31:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:31:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103238257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:31:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.646571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108646609, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1364, "num_deletes": 256, "total_data_size": 2135109, "memory_usage": 2174240, "flush_reason": "Manual Compaction"}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108661852, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2103981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46298, "largest_seqno": 47661, "table_properties": {"data_size": 2097479, "index_size": 3702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13178, "raw_average_key_size": 19, "raw_value_size": 2084531, "raw_average_value_size": 3083, "num_data_blocks": 166, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764728965, "oldest_key_time": 1764728965, "file_creation_time": 1764729108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 15350 microseconds, and 7119 cpu microseconds.
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.661918) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2103981 bytes OK
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.661939) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664011) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664029) EVENT_LOG_v1 {"time_micros": 1764729108664022, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.664049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2129025, prev total WAL file size 2129025, number of live WAL files 2.
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.665503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303039' seq:0, type:0; will stop at (end)
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2054KB)], [110(7733KB)]
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108665610, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 10023536, "oldest_snapshot_seqno": -1}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6241 keys, 9914551 bytes, temperature: kUnknown
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108747301, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9914551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9872915, "index_size": 24950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 162576, "raw_average_key_size": 26, "raw_value_size": 9760126, "raw_average_value_size": 1563, "num_data_blocks": 998, "num_entries": 6241, "num_filter_entries": 6241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.747736) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9914551 bytes
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.750501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.6 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.6 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 6765, records dropped: 524 output_compression: NoCompression
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.750586) EVENT_LOG_v1 {"time_micros": 1764729108750520, "job": 66, "event": "compaction_finished", "compaction_time_micros": 81782, "compaction_time_cpu_micros": 30875, "output_level": 6, "num_output_files": 1, "total_output_size": 9914551, "num_input_records": 6765, "num_output_records": 6241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108751423, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729108754738, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.665047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:48 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:31:48.755130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:31:49 compute-0 nova_compute[351485]: 2025-12-03 02:31:49.228 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:51 compute-0 nova_compute[351485]: 2025-12-03 02:31:51.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:54 compute-0 nova_compute[351485]: 2025-12-03 02:31:54.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:56 compute-0 nova_compute[351485]: 2025-12-03 02:31:56.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 194e852c-c153-4376-bf73-c2bd8d55dcd6 does not exist
Dec  3 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cf8996a8-98f5-4786-a74b-72f570af7c2b does not exist
Dec  3 02:31:57 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 75610ed0-e05f-49e4-a5fd-e88ad258c152 does not exist
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:31:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:31:57 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:31:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.506269371 +0000 UTC m=+0.058905103 container create 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:31:58 compute-0 systemd[1]: Started libpod-conmon-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope.
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.484347692 +0000 UTC m=+0.036983454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:31:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:31:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.653776973 +0000 UTC m=+0.206412805 container init 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.672229814 +0000 UTC m=+0.224865556 container start 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.677730009 +0000 UTC m=+0.230365831 container attach 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 02:31:58 compute-0 stoic_hermann[470756]: 167 167
Dec  3 02:31:58 compute-0 systemd[1]: libpod-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope: Deactivated successfully.
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.685781607 +0000 UTC m=+0.238417419 container died 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5fd00b9ad52f5456a440e38b6d3f3b4a98b37be57ab0578c78f0cc149abe3a1-merged.mount: Deactivated successfully.
Dec  3 02:31:58 compute-0 podman[470742]: 2025-12-03 02:31:58.771915697 +0000 UTC m=+0.324551469 container remove 7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:31:58 compute-0 systemd[1]: libpod-conmon-7fdc6517e69c7f47cf2fa0470825ac36e0d6c7071942ca674b3f75a2048f57d4.scope: Deactivated successfully.
Dec  3 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.115473112 +0000 UTC m=+0.092329496 container create b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.082868892 +0000 UTC m=+0.059725316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:31:59 compute-0 systemd[1]: Started libpod-conmon-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope.
Dec  3 02:31:59 compute-0 nova_compute[351485]: 2025-12-03 02:31:59.233 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:31:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.306967375 +0000 UTC m=+0.283823809 container init b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  3 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.359901329 +0000 UTC m=+0.336757703 container start b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:31:59 compute-0 podman[470779]: 2025-12-03 02:31:59.367489473 +0000 UTC m=+0.344345867 container attach b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:31:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.667 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:31:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:31:59.671 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:31:59 compute-0 podman[158098]: time="2025-12-03T02:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45514 "" "Go-http-client/1.1"
Dec  3 02:31:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9090 "" "Go-http-client/1.1"
Dec  3 02:32:00 compute-0 angry_mclean[470795]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:32:00 compute-0 angry_mclean[470795]: --> relative data size: 1.0
Dec  3 02:32:00 compute-0 angry_mclean[470795]: --> All data devices are unavailable
Dec  3 02:32:00 compute-0 systemd[1]: libpod-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Deactivated successfully.
Dec  3 02:32:00 compute-0 systemd[1]: libpod-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Consumed 1.329s CPU time.
Dec  3 02:32:00 compute-0 podman[470779]: 2025-12-03 02:32:00.759391283 +0000 UTC m=+1.736247667 container died b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 02:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba42076c0bd270941cb98c88532a03a09dc9a2bc884e362030deda31a2510a93-merged.mount: Deactivated successfully.
Dec  3 02:32:00 compute-0 podman[470779]: 2025-12-03 02:32:00.847138749 +0000 UTC m=+1.823995123 container remove b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mclean, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 02:32:00 compute-0 systemd[1]: libpod-conmon-b1085d6f6e7cfa0a07d033b8d0bb6611af57c311a7e9909aa54f4bc70936ca5b.scope: Deactivated successfully.
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:32:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:32:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:01 compute-0 nova_compute[351485]: 2025-12-03 02:32:01.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.112335783 +0000 UTC m=+0.101031822 container create 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.076648206 +0000 UTC m=+0.065344305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:32:02 compute-0 systemd[1]: Started libpod-conmon-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope.
Dec  3 02:32:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.290249784 +0000 UTC m=+0.278945863 container init 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.307376447 +0000 UTC m=+0.296072476 container start 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.314007754 +0000 UTC m=+0.302703833 container attach 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  3 02:32:02 compute-0 sad_hermann[470992]: 167 167
Dec  3 02:32:02 compute-0 systemd[1]: libpod-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope: Deactivated successfully.
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.321125785 +0000 UTC m=+0.309821854 container died 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee30bd922b8c21b4473dd98081935579241de0fd8b717abc0c450fc96b056d77-merged.mount: Deactivated successfully.
Dec  3 02:32:02 compute-0 podman[470976]: 2025-12-03 02:32:02.408000467 +0000 UTC m=+0.396696506 container remove 09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:32:02 compute-0 systemd[1]: libpod-conmon-09e75f6d362de42dd442fb92a9244a8edb9e29a15495a05f11a44c1f82564397.scope: Deactivated successfully.
Dec  3 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.698666438 +0000 UTC m=+0.092057328 container create ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  3 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.662102857 +0000 UTC m=+0.055493827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:32:02 compute-0 systemd[1]: Started libpod-conmon-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope.
Dec  3 02:32:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.870813686 +0000 UTC m=+0.264204606 container init ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.890178633 +0000 UTC m=+0.283569553 container start ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:32:02 compute-0 podman[471015]: 2025-12-03 02:32:02.896842871 +0000 UTC m=+0.290233791 container attach ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:32:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]: {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    "0": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "devices": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "/dev/loop3"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            ],
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_name": "ceph_lv0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_size": "21470642176",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "name": "ceph_lv0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "tags": {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_name": "ceph",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.crush_device_class": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.encrypted": "0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_id": "0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.vdo": "0"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            },
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "vg_name": "ceph_vg0"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        }
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    ],
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    "1": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "devices": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "/dev/loop4"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            ],
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_name": "ceph_lv1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_size": "21470642176",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "name": "ceph_lv1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "tags": {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_name": "ceph",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.crush_device_class": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.encrypted": "0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_id": "1",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.vdo": "0"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            },
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "vg_name": "ceph_vg1"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        }
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    ],
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    "2": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "devices": [
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "/dev/loop5"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            ],
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_name": "ceph_lv2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_size": "21470642176",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "name": "ceph_lv2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "tags": {
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.cluster_name": "ceph",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.crush_device_class": "",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.encrypted": "0",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osd_id": "2",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:                "ceph.vdo": "0"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            },
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "type": "block",
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:            "vg_name": "ceph_vg2"
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:        }
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]:    ]
Dec  3 02:32:03 compute-0 intelligent_darwin[471030]: }
Dec  3 02:32:03 compute-0 systemd[1]: libpod-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope: Deactivated successfully.
Dec  3 02:32:03 compute-0 podman[471015]: 2025-12-03 02:32:03.750742818 +0000 UTC m=+1.144133798 container died ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-059d47155ba265b3177b56d0de33912309472eb3c4cddf35777713d4f4b7d783-merged.mount: Deactivated successfully.
Dec  3 02:32:03 compute-0 podman[471015]: 2025-12-03 02:32:03.844229376 +0000 UTC m=+1.237620256 container remove ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  3 02:32:03 compute-0 systemd[1]: libpod-conmon-ef8a7b3858fcc6d56cb176ac279c134203d9b53a05fdd9f32a410571b06e8906.scope: Deactivated successfully.
Dec  3 02:32:04 compute-0 podman[471075]: 2025-12-03 02:32:04.146219318 +0000 UTC m=+0.116205260 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  3 02:32:04 compute-0 podman[471077]: 2025-12-03 02:32:04.153120623 +0000 UTC m=+0.119226646 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:32:04 compute-0 podman[471076]: 2025-12-03 02:32:04.154292426 +0000 UTC m=+0.122260971 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 02:32:04 compute-0 nova_compute[351485]: 2025-12-03 02:32:04.235 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.830070647 +0000 UTC m=+0.056119175 container create b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:32:04 compute-0 systemd[1]: Started libpod-conmon-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope.
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.810448993 +0000 UTC m=+0.036497521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:32:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.956735871 +0000 UTC m=+0.182784429 container init b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.972767353 +0000 UTC m=+0.198815891 container start b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.97937544 +0000 UTC m=+0.205423988 container attach b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:32:04 compute-0 recursing_nobel[471261]: 167 167
Dec  3 02:32:04 compute-0 systemd[1]: libpod-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope: Deactivated successfully.
Dec  3 02:32:04 compute-0 podman[471245]: 2025-12-03 02:32:04.985504833 +0000 UTC m=+0.211553341 container died b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 02:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4740b9722af89fb3f6ebee4710a70ea74a682a6fe81dc066ef75122332b02bb3-merged.mount: Deactivated successfully.
Dec  3 02:32:05 compute-0 podman[471245]: 2025-12-03 02:32:05.039487106 +0000 UTC m=+0.265535614 container remove b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:32:05 compute-0 systemd[1]: libpod-conmon-b5caf5969b3a6b766a864cf148ad12b29cda602da4fd48c43386283c5c159def.scope: Deactivated successfully.
Dec  3 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.345517092 +0000 UTC m=+0.100181558 container create 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.308004324 +0000 UTC m=+0.062668830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:32:05 compute-0 systemd[1]: Started libpod-conmon-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope.
Dec  3 02:32:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.53146807 +0000 UTC m=+0.286132566 container init 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.563828293 +0000 UTC m=+0.318492749 container start 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec  3 02:32:05 compute-0 podman[471283]: 2025-12-03 02:32:05.572407305 +0000 UTC m=+0.327071821 container attach 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:32:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]: {
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_id": 2,
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "type": "bluestore"
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    },
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_id": 1,
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "type": "bluestore"
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    },
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_id": 0,
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:        "type": "bluestore"
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]:    }
Dec  3 02:32:06 compute-0 festive_ishizaka[471299]: }
Dec  3 02:32:06 compute-0 systemd[1]: libpod-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Deactivated successfully.
Dec  3 02:32:06 compute-0 podman[471283]: 2025-12-03 02:32:06.745126258 +0000 UTC m=+1.499790704 container died 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:32:06 compute-0 systemd[1]: libpod-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Consumed 1.183s CPU time.
Dec  3 02:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8553bb06f084064e86437e78fd19a25c13be2bcd4f74af8e762e80017b1720-merged.mount: Deactivated successfully.
Dec  3 02:32:06 compute-0 podman[471283]: 2025-12-03 02:32:06.828394868 +0000 UTC m=+1.583059334 container remove 782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:32:06 compute-0 systemd[1]: libpod-conmon-782eb842465bdd8430294ba289eaa1519087c62f9975bc35426ffd5f33e82b08.scope: Deactivated successfully.
Dec  3 02:32:06 compute-0 nova_compute[351485]: 2025-12-03 02:32:06.854 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:32:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:32:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:32:06 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:32:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1bc9dd6a-b2a6-4506-9eeb-e91519edfdd0 does not exist
Dec  3 02:32:06 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev cdc62077-a070-4d4c-b68d-5881f3072eb1 does not exist
Dec  3 02:32:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:32:06 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:32:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:09 compute-0 nova_compute[351485]: 2025-12-03 02:32:09.237 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.766 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.767 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.768 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.770 351492 INFO nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Terminating instance#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.772 351492 DEBUG nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 02:32:10 compute-0 kernel: tapf36a9f58-d7 (unregistering): left promiscuous mode
Dec  3 02:32:10 compute-0 NetworkManager[48912]: <info>  [1764729130.9145] device (tapf36a9f58-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00196|binding|INFO|Releasing lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a from this chassis (sb_readonly=0)
Dec  3 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00197|binding|INFO|Setting lport f36a9f58-d7c9-4f05-942d-5a2c4cce705a down in Southbound
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.933 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:10 compute-0 ovn_controller[89134]: 2025-12-03T02:32:10Z|00198|binding|INFO|Removing iface tapf36a9f58-d7 ovn-installed in OVS
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.937 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:10 compute-0 nova_compute[351485]: 2025-12-03 02:32:10.962 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  3 02:32:11 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 34.099s CPU time.
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.014 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ed:eb 10.100.0.239'], port_security=['fa:16:3e:dd:ed:eb 10.100.0.239'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.239/16', 'neutron:device_id': '2890ee5c-21c1-4e9d-9421-1a2df0f67f76', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=f36a9f58-d7c9-4f05-942d-5a2c4cce705a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:32:11 compute-0 systemd-machined[138558]: Machine qemu-15-instance-0000000e terminated.
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.018 288528 INFO neutron.agent.ovn.metadata.agent [-] Port f36a9f58-d7c9-4f05-942d-5a2c4cce705a in datapath a7615b73-b987-4b91-b12c-2d7488085657 unbound from our chassis#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.021 288528 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7615b73-b987-4b91-b12c-2d7488085657#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.050 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[685eadca-7a18-43ca-940a-f1542d87cd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.102 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[6e090291-4a32-4bac-9acf-ba2525f40a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.107 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[43ca7411-32c4-4258-b43d-6190a82727e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.153 414771 DEBUG oslo.privsep.daemon [-] privsep: reply[cde2e097-5c40-4469-9274-f242c236a555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.183 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9784b0-16d1-4eb7-80ca-32085c8e2c50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7615b73-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:3e:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719210, 'reachable_time': 41270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 471403, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.215 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.216 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[6faf479f-e4a1-45dc-a15c-9d762d4eab0a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719227, 'tstamp': 719227}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 471405, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa7615b73-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719234, 'tstamp': 719234}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 471405, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.219 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.222 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.227 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.238 351492 INFO nova.virt.libvirt.driver [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Instance destroyed successfully.#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.239 351492 DEBUG nova.objects.instance [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'resources' on Instance uuid 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.244 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.245 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7615b73-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.246 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.246 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7615b73-b0, col_values=(('external_ids', {'iface-id': '50c454e1-4a4b-4aad-b47b-dafc7b079018'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:11 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:11.247 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.263 351492 DEBUG nova.virt.libvirt.vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:19:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-n4fdz722tgvn-jwe375iwm6yr',id=14,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:19:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-czfymphz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:19:13Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=2890ee5c-21c1-4e9d-9421-1a2df0f67f76,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.263 351492 DEBUG nova.network.os_vif_util [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "address": "fa:16:3e:dd:ed:eb", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.239", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf36a9f58-d7", "ovs_interfaceid": "f36a9f58-d7c9-4f05-942d-5a2c4cce705a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.264 351492 DEBUG nova.network.os_vif_util [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.265 351492 DEBUG os_vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.268 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf36a9f58-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.273 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.274 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:11 compute-0 nova_compute[351485]: 2025-12-03 02:32:11.280 351492 INFO os_vif [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ed:eb,bridge_name='br-int',has_traffic_filtering=True,id=f36a9f58-d7c9-4f05-942d-5a2c4cce705a,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf36a9f58-d7')#033[00m
Dec  3 02:32:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.275 351492 INFO nova.virt.libvirt.driver [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deleting instance files /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_del#033[00m
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.276 351492 INFO nova.virt.libvirt.driver [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deletion of /var/lib/nova/instances/2890ee5c-21c1-4e9d-9421-1a2df0f67f76_del complete#033[00m
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.348 351492 INFO nova.compute.manager [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.349 351492 DEBUG oslo.service.loopingcall [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.351 351492 DEBUG nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.351 351492 DEBUG nova.network.neutron [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.581 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.583 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.585 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.586 351492 DEBUG oslo_concurrency.lockutils [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.587 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.588 351492 DEBUG nova.compute.manager [req-5785768a-0261-48a8-89c1-a5fafadc3303 req-7876cf6a-0f2b-4f12-8298-380b3055b49a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-unplugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  3 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.797 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.798 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  3 02:32:12 compute-0 nova_compute[351485]: 2025-12-03 02:32:12.798 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:12 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:12.802 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  3 02:32:12 compute-0 podman[471435]: 2025-12-03 02:32:12.887197306 +0000 UTC m=+0.132343856 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.264 351492 DEBUG nova.network.neutron [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.301 351492 INFO nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Took 0.95 seconds to deallocate network for instance.
Dec  3 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.369 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.370 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:13 compute-0 nova_compute[351485]: 2025-12-03 02:32:13.479 351492 DEBUG oslo_concurrency.processutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:32:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:32:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936787387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.032 351492 DEBUG oslo_concurrency.processutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.042 351492 DEBUG nova.compute.provider_tree [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.059 351492 DEBUG nova.scheduler.client.report [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.079 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.101 351492 INFO nova.scheduler.client.report [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Deleted allocations for instance 2890ee5c-21c1-4e9d-9421-1a2df0f67f76
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.155 351492 DEBUG oslo_concurrency.lockutils [None req-7686e067-c256-4b5b-8848-c27319400f31 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:14 compute-0 nova_compute[351485]: 2025-12-03 02:32:14.241 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.405 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.405 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.406 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.406 351492 DEBUG oslo_concurrency.lockutils [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "2890ee5c-21c1-4e9d-9421-1a2df0f67f76-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.407 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] No waiting events found dispatching network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.408 351492 WARNING nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received unexpected event network-vif-plugged-f36a9f58-d7c9-4f05-942d-5a2c4cce705a for instance with vm_state deleted and task_state None.
Dec  3 02:32:15 compute-0 nova_compute[351485]: 2025-12-03 02:32:15.408 351492 DEBUG nova.compute.manager [req-708588a1-457b-482c-98ad-43d01ad73373 req-f3830f35-63c7-4eb9-858e-f6e3df7b0974 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Received event network-vif-deleted-f36a9f58-d7c9-4f05-942d-5a2c4cce705a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:32:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 177 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 25 op/s
Dec  3 02:32:15 compute-0 podman[471479]: 2025-12-03 02:32:15.876822119 +0000 UTC m=+0.107467973 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:32:15 compute-0 podman[471478]: 2025-12-03 02:32:15.884197178 +0000 UTC m=+0.121979523 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:32:15 compute-0 podman[471484]: 2025-12-03 02:32:15.89137861 +0000 UTC m=+0.109166231 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:32:15 compute-0 podman[471480]: 2025-12-03 02:32:15.898075999 +0000 UTC m=+0.113685639 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, distribution-scope=public)
Dec  3 02:32:15 compute-0 podman[471477]: 2025-12-03 02:32:15.919172895 +0000 UTC m=+0.160667435 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  3 02:32:16 compute-0 nova_compute[351485]: 2025-12-03 02:32:16.272 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:19 compute-0 nova_compute[351485]: 2025-12-03 02:32:19.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.333 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.334 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.335 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.336 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.337 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.339 351492 INFO nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Terminating instance
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.341 351492 DEBUG nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  3 02:32:21 compute-0 kernel: tap94fdb5b9-66 (unregistering): left promiscuous mode
Dec  3 02:32:21 compute-0 NetworkManager[48912]: <info>  [1764729141.4886] device (tap94fdb5b9-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  3 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00199|binding|INFO|Releasing lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c from this chassis (sb_readonly=0)
Dec  3 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00200|binding|INFO|Setting lport 94fdb5b9-66bf-4e81-b411-064b08e4c71c down in Southbound
Dec  3 02:32:21 compute-0 ovn_controller[89134]: 2025-12-03T02:32:21Z|00201|binding|INFO|Removing iface tap94fdb5b9-66 ovn-installed in OVS
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.519 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.523 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.529 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0c:ae 10.100.1.46'], port_security=['fa:16:3e:3f:0c:ae 10.100.1.46'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.46/16', 'neutron:device_id': '4fb8fc07-d7b7-4be8-94da-155b040faf32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7615b73-b987-4b91-b12c-2d7488085657', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63f39ac2863946b8b817457e689ff933', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80ea8f15-ca6c-4a1b-8590-f50ba85e3add', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2f8982b-cbe8-4539-87ff-9ffeb5a93018, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>], logical_port=94fdb5b9-66bf-4e81-b411-064b08e4c71c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f652f2c4fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  3 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.532 288528 INFO neutron.agent.ovn.metadata.agent [-] Port 94fdb5b9-66bf-4e81-b411-064b08e4c71c in datapath a7615b73-b987-4b91-b12c-2d7488085657 unbound from our chassis
Dec  3 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.534 288528 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7615b73-b987-4b91-b12c-2d7488085657, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  3 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.536 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[2f627e57-f33a-4b7f-9bdb-b15cf0219708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 02:32:21 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:21.538 288528 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 namespace which is not needed anymore
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.559 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  3 02:32:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 2.865s CPU time.
Dec  3 02:32:21 compute-0 systemd-machined[138558]: Machine qemu-16-instance-0000000f terminated.
Dec  3 02:32:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.742 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.744 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.745 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.747 351492 DEBUG oslo_concurrency.lockutils [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.748 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.749 351492 DEBUG nova.compute.manager [req-1099b501-90e6-454a-917d-e646c3e4e5da req-c5ee94ad-2f91-49be-bae1-b856083867b4 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-unplugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : haproxy version is 2.8.14-c23fe91
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [NOTICE]   (452993) : path to executable is /usr/sbin/haproxy
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [WARNING]  (452993) : Exiting Master process...
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [WARNING]  (452993) : Exiting Master process...
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [ALERT]    (452993) : Current worker (452995) exited with code 143 (Terminated)
Dec  3 02:32:21 compute-0 neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657[452989]: [WARNING]  (452993) : All workers exited. Exiting... (0)
Dec  3 02:32:21 compute-0 systemd[1]: libpod-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope: Deactivated successfully.
Dec  3 02:32:21 compute-0 podman[471601]: 2025-12-03 02:32:21.785624367 +0000 UTC m=+0.089889338 container died c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.807 351492 INFO nova.virt.libvirt.driver [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Instance destroyed successfully.#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.808 351492 DEBUG nova.objects.instance [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lazy-loading 'resources' on Instance uuid 4fb8fc07-d7b7-4be8-94da-155b040faf32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.827 351492 DEBUG nova.virt.libvirt.vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-03T02:22:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-8071397-asg-3rvfkoaoyxm3-pdxc7a4qjxpu-j7dwudlie42q',id=15,image_ref='8876482c-db67-48c0-9203-60685152fc9d',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-03T02:22:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='38bfb145-4971-41b6-9bc3-faf3c3931019'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63f39ac2863946b8b817457e689ff933',ramdisk_id='',reservation_id='r-xvixyek3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8876482c-db67-48c0-9203-60685152fc9d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1008659157',owner_user_name='tempest-PrometheusGabbiTest-1008659157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-03T02:22:24Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8f61f44789494541b7c101b0fdab52f0',uuid=4fb8fc07-d7b7-4be8-94da-155b040faf32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.828 351492 DEBUG nova.network.os_vif_util [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converting VIF {"id": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "address": "fa:16:3e:3f:0c:ae", "network": {"id": "a7615b73-b987-4b91-b12c-2d7488085657", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.46", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63f39ac2863946b8b817457e689ff933", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94fdb5b9-66", "ovs_interfaceid": "94fdb5b9-66bf-4e81-b411-064b08e4c71c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.829 351492 DEBUG nova.network.os_vif_util [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.830 351492 DEBUG os_vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.834 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.836 351492 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94fdb5b9-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.846 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.850 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  3 02:32:21 compute-0 nova_compute[351485]: 2025-12-03 02:32:21.853 351492 INFO os_vif [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:0c:ae,bridge_name='br-int',has_traffic_filtering=True,id=94fdb5b9-66bf-4e81-b411-064b08e4c71c,network=Network(a7615b73-b987-4b91-b12c-2d7488085657),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94fdb5b9-66')#033[00m
Dec  3 02:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6-userdata-shm.mount: Deactivated successfully.
Dec  3 02:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-88013123d4a753ad03452e7c5ee2f44c7a3cff6bfcbc4c86988a478219f1d093-merged.mount: Deactivated successfully.
Dec  3 02:32:21 compute-0 podman[471601]: 2025-12-03 02:32:21.884493167 +0000 UTC m=+0.188758118 container cleanup c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:32:21 compute-0 systemd[1]: libpod-conmon-c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6.scope: Deactivated successfully.
Dec  3 02:32:22 compute-0 podman[471652]: 2025-12-03 02:32:22.033280266 +0000 UTC m=+0.098910333 container remove c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.053 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3fddfc-c223-465d-97cf-5da5197c7904]: (4, ('Wed Dec  3 02:32:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 (c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6)\nc800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6\nWed Dec  3 02:32:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 (c800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6)\nc800fdc7996a5ce9fede2c3aba64d14e29e89828606aa9d2a7ffa7487fe7cad6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.057 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[72f6062c-d682-4ca5-8682-8eb4f28d00e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.059 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7615b73-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.062 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:22 compute-0 kernel: tapa7615b73-b0: left promiscuous mode
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.071 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9edeff-db22-4777-93ff-fe493ede465d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.097 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[87e4b66a-616e-4740-9128-31588855637d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.099 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2898e2-5ced-4ff3-a22a-275447eb1b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.125 414755 DEBUG oslo.privsep.daemon [-] privsep: reply[32c3dc7c-6b35-4b4f-9828-f85930eba1d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719201, 'reachable_time': 35807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 471669, 'error': None, 'target': 'ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.129 288639 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7615b73-b987-4b91-b12c-2d7488085657 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  3 02:32:22 compute-0 systemd[1]: run-netns-ovnmeta\x2da7615b73\x2db987\x2d4b91\x2db12c\x2d2d7488085657.mount: Deactivated successfully.
Dec  3 02:32:22 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:22.129 288639 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ee25aa-3b4c-4c44-8f97-d3c9286210b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.740 351492 INFO nova.virt.libvirt.driver [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deleting instance files /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32_del#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.742 351492 INFO nova.virt.libvirt.driver [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deletion of /var/lib/nova/instances/4fb8fc07-d7b7-4be8-94da-155b040faf32_del complete#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.806 351492 INFO nova.compute.manager [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 1.46 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.807 351492 DEBUG oslo.service.loopingcall [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.807 351492 DEBUG nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 02:32:22 compute-0 nova_compute[351485]: 2025-12-03 02:32:22.808 351492 DEBUG nova.network.neutron [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.562 351492 DEBUG nova.network.neutron [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.581 351492 INFO nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Took 0.77 seconds to deallocate network for instance.#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.619 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.620 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:32:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.677 351492 DEBUG oslo_concurrency.processutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.822 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.823 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Acquiring lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.824 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.826 351492 DEBUG oslo_concurrency.lockutils [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.826 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] No waiting events found dispatching network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.827 351492 WARNING nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received unexpected event network-vif-plugged-94fdb5b9-66bf-4e81-b411-064b08e4c71c for instance with vm_state deleted and task_state None.
Dec  3 02:32:23 compute-0 nova_compute[351485]: 2025-12-03 02:32:23.828 351492 DEBUG nova.compute.manager [req-6350c315-c8b2-4e61-adc2-ed529b12ee85 req-da679287-3708-452f-b3b0-79b4e2d0856a 9a5a0781c3c44ccda531698170cc8adc 418cfe15e6b54be19149c6b03ab0d5b7 - - default default] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Received event network-vif-deleted-94fdb5b9-66bf-4e81-b411-064b08e4c71c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  3 02:32:24 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:32:24 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384299428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.152 351492 DEBUG oslo_concurrency.processutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.167 351492 DEBUG nova.compute.provider_tree [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.187 351492 DEBUG nova.scheduler.client.report [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.224 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.249 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.269 351492 INFO nova.scheduler.client.report [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Deleted allocations for instance 4fb8fc07-d7b7-4be8-94da-155b040faf32
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.361 351492 DEBUG oslo_concurrency.lockutils [None req-3afd028c-54f9-4c23-bc87-389d4bed2dd0 8f61f44789494541b7c101b0fdab52f0 63f39ac2863946b8b817457e689ff933 - - default default] Lock "4fb8fc07-d7b7-4be8-94da-155b040faf32" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:24 compute-0 nova_compute[351485]: 2025-12-03 02:32:24.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 115 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 49 op/s
Dec  3 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.232 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764729131.2301612, 2890ee5c-21c1-4e9d-9421-1a2df0f67f76 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.233 351492 INFO nova.compute.manager [-] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] VM Stopped (Lifecycle Event)
Dec  3 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.273 351492 DEBUG nova.compute.manager [None req-2d9d8389-e8fb-46c7-830d-26333c38771f - - - - - -] [instance: 2890ee5c-21c1-4e9d-9421-1a2df0f67f76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 02:32:26 compute-0 nova_compute[351485]: 2025-12-03 02:32:26.840 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:32:28
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Dec  3 02:32:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:32:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:29 compute-0 nova_compute[351485]: 2025-12-03 02:32:29.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:32:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:29 compute-0 podman[158098]: time="2025-12-03T02:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:32:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:32:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.604 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.605 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.606 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.607 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.608 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:32:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:31 compute-0 nova_compute[351485]: 2025-12-03 02:32:31.844 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:32 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:32:32 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402290756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.144 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.790 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  3 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.792 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3965MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  3 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.793 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 02:32:32 compute-0 nova_compute[351485]: 2025-12-03 02:32:32.793 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.123 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.124 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.200 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 02:32:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 77 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  3 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  3 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  3 02:32:33 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  3 02:32:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:32:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173988335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.819 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.839 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.863 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.883 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 02:32:33 compute-0 nova_compute[351485]: 2025-12-03 02:32:33.884 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:32:34 compute-0 nova_compute[351485]: 2025-12-03 02:32:34.257 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:34 compute-0 podman[471742]: 2025-12-03 02:32:34.871851515 +0000 UTC m=+0.108239226 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:32:34 compute-0 podman[471740]: 2025-12-03 02:32:34.90008141 +0000 UTC m=+0.140942337 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:32:34 compute-0 podman[471741]: 2025-12-03 02:32:34.900252605 +0000 UTC m=+0.131657225 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  3 02:32:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.4 KiB/s wr, 10 op/s
Dec  3 02:32:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  3 02:32:35 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  3 02:32:35 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  3 02:32:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  3 02:32:36 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  3 02:32:36 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.801 351492 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764729141.7989619, 4fb8fc07-d7b7-4be8-94da-155b040faf32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.802 351492 INFO nova.compute.manager [-] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] VM Stopped (Lifecycle Event)
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.831 351492 DEBUG nova.compute.manager [None req-923b733c-bf8c-43d5-a698-d45e8fcf898d - - - - - -] [instance: 4fb8fc07-d7b7-4be8-94da-155b040faf32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.848 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.885 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.917 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 02:32:36 compute-0 nova_compute[351485]: 2025-12-03 02:32:36.917 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:37 compute-0 nova_compute[351485]: 2025-12-03 02:32:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:37 compute-0 nova_compute[351485]: 2025-12-03 02:32:37.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 69 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 761 KiB/s wr, 65 op/s
Dec  3 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  3 02:32:38 compute-0 nova_compute[351485]: 2025-12-03 02:32:38.615 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  3 02:32:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0012519738555707494 of space, bias 1.0, pg target 0.3755921566712248 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.261 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:32:39 compute-0 nova_compute[351485]: 2025-12-03 02:32:39.609 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:32:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 61 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 763 KiB/s wr, 75 op/s
Dec  3 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:32:40 compute-0 nova_compute[351485]: 2025-12-03 02:32:40.626 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:41 compute-0 nova_compute[351485]: 2025-12-03 02:32:41.603 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:32:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 2.6 MiB/s wr, 87 op/s
Dec  3 02:32:41 compute-0 nova_compute[351485]: 2025-12-03 02:32:41.851 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:43 compute-0 nova_compute[351485]: 2025-12-03 02:32:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:32:43 compute-0 nova_compute[351485]: 2025-12-03 02:32:43.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:32:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec  3 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  3 02:32:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  3 02:32:43 compute-0 ceph-mon[192821]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  3 02:32:43 compute-0 podman[471800]: 2025-12-03 02:32:43.866239564 +0000 UTC m=+0.119909175 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:32:44 compute-0 nova_compute[351485]: 2025-12-03 02:32:44.264 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec  3 02:32:46 compute-0 nova_compute[351485]: 2025-12-03 02:32:46.854 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:46 compute-0 podman[471822]: 2025-12-03 02:32:46.867911071 +0000 UTC m=+0.101830615 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc.)
Dec  3 02:32:46 compute-0 podman[471821]: 2025-12-03 02:32:46.880744333 +0000 UTC m=+0.119331468 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:32:46 compute-0 podman[471820]: 2025-12-03 02:32:46.892928417 +0000 UTC m=+0.136790571 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  3 02:32:46 compute-0 podman[471823]: 2025-12-03 02:32:46.901418676 +0000 UTC m=+0.125909564 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  3 02:32:46 compute-0 podman[471819]: 2025-12-03 02:32:46.901619172 +0000 UTC m=+0.148199343 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:32:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:32:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:32:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946065896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:32:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Dec  3 02:32:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:32:49 compute-0 nova_compute[351485]: 2025-12-03 02:32:49.269 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec  3 02:32:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:51 compute-0 nova_compute[351485]: 2025-12-03 02:32:51.857 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:32:54 compute-0 nova_compute[351485]: 2025-12-03 02:32:54.273 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:56 compute-0 nova_compute[351485]: 2025-12-03 02:32:56.860 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:32:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:32:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:32:59 compute-0 nova_compute[351485]: 2025-12-03 02:32:59.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:32:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.668 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.669 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:32:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:32:59.669 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:32:59 compute-0 podman[158098]: time="2025-12-03T02:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:32:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:33:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:33:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:01 compute-0 nova_compute[351485]: 2025-12-03 02:33:01.863 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:04 compute-0 nova_compute[351485]: 2025-12-03 02:33:04.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:05 compute-0 podman[471920]: 2025-12-03 02:33:05.840972275 +0000 UTC m=+0.089381544 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:33:05 compute-0 podman[471922]: 2025-12-03 02:33:05.846036408 +0000 UTC m=+0.087088529 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:33:05 compute-0 podman[471921]: 2025-12-03 02:33:05.868491581 +0000 UTC m=+0.111764055 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:33:06 compute-0 nova_compute[351485]: 2025-12-03 02:33:06.866 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aba42754-2fa6-4d55-9898-57826b6f4e86 does not exist
Dec  3 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 65241505-fc7e-4c43-9303-7e9296cedbca does not exist
Dec  3 02:33:08 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 13cfc2a0-8921-4f1d-8e1b-624d71ce29a2 does not exist
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:33:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:08 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:33:09 compute-0 nova_compute[351485]: 2025-12-03 02:33:09.281 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.66329551 +0000 UTC m=+0.085795472 container create 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.628259641 +0000 UTC m=+0.050759663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:09 compute-0 systemd[1]: Started libpod-conmon-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope.
Dec  3 02:33:09 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.822239525 +0000 UTC m=+0.244739567 container init 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.840157691 +0000 UTC m=+0.262657673 container start 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.847825227 +0000 UTC m=+0.270325199 container attach 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:33:09 compute-0 sleepy_benz[472265]: 167 167
Dec  3 02:33:09 compute-0 systemd[1]: libpod-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope: Deactivated successfully.
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.853863328 +0000 UTC m=+0.276363300 container died 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-000cf0f57ff64c9ebb7553f14f9b3182d5039588ba3083513c6ac578b604ca90-merged.mount: Deactivated successfully.
Dec  3 02:33:09 compute-0 podman[472249]: 2025-12-03 02:33:09.945182955 +0000 UTC m=+0.367682917 container remove 621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 02:33:09 compute-0 systemd[1]: libpod-conmon-621d9cb9711f0235ec0863973ca3be9d94c67f13a54d999666cae78b3d9662b8.scope: Deactivated successfully.
Dec  3 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.243313778 +0000 UTC m=+0.095101335 container create 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.208369502 +0000 UTC m=+0.060157119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:10 compute-0 systemd[1]: Started libpod-conmon-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope.
Dec  3 02:33:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.462339099 +0000 UTC m=+0.314126676 container init 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.479075181 +0000 UTC m=+0.330862728 container start 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:10 compute-0 podman[472287]: 2025-12-03 02:33:10.48469176 +0000 UTC m=+0.336479337 container attach 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:33:10 compute-0 nova_compute[351485]: 2025-12-03 02:33:10.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:11 compute-0 great_ellis[472303]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:33:11 compute-0 great_ellis[472303]: --> relative data size: 1.0
Dec  3 02:33:11 compute-0 great_ellis[472303]: --> All data devices are unavailable
Dec  3 02:33:11 compute-0 systemd[1]: libpod-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Deactivated successfully.
Dec  3 02:33:11 compute-0 podman[472287]: 2025-12-03 02:33:11.783965044 +0000 UTC m=+1.635752621 container died 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:33:11 compute-0 systemd[1]: libpod-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Consumed 1.240s CPU time.
Dec  3 02:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df890217a81dcdbdf6abffe3d217def7a7fa232b08f3568d43a17e2b31dcfa8-merged.mount: Deactivated successfully.
Dec  3 02:33:11 compute-0 nova_compute[351485]: 2025-12-03 02:33:11.869 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:11 compute-0 podman[472287]: 2025-12-03 02:33:11.888488584 +0000 UTC m=+1.740276131 container remove 276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.891504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191891579, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 253, "total_data_size": 1315641, "memory_usage": 1343312, "flush_reason": "Manual Compaction"}
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191902479, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1302551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47662, "largest_seqno": 48624, "table_properties": {"data_size": 1297713, "index_size": 2426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10645, "raw_average_key_size": 19, "raw_value_size": 1287907, "raw_average_value_size": 2416, "num_data_blocks": 108, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729108, "oldest_key_time": 1764729108, "file_creation_time": 1764729191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 11074 microseconds, and 5154 cpu microseconds.
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.902574) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1302551 bytes OK
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.902595) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905718) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905735) EVENT_LOG_v1 {"time_micros": 1764729191905730, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.905753) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1310999, prev total WAL file size 1310999, number of live WAL files 2.
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.907018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1272KB)], [113(9682KB)]
Dec  3 02:33:11 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729191907132, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11217102, "oldest_snapshot_seqno": -1}
Dec  3 02:33:11 compute-0 systemd[1]: libpod-conmon-276c195c51576f5f26c8a7e462af4e886bfe416c603a2c566c37faeb60b5fbce.scope: Deactivated successfully.
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6253 keys, 9455418 bytes, temperature: kUnknown
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192002722, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9455418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9413965, "index_size": 24703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163545, "raw_average_key_size": 26, "raw_value_size": 9301228, "raw_average_value_size": 1487, "num_data_blocks": 981, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.003034) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9455418 bytes
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.005877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 98.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.9) write-amplify(7.3) OK, records in: 6774, records dropped: 521 output_compression: NoCompression
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.005905) EVENT_LOG_v1 {"time_micros": 1764729192005892, "job": 68, "event": "compaction_finished", "compaction_time_micros": 95675, "compaction_time_cpu_micros": 44462, "output_level": 6, "num_output_files": 1, "total_output_size": 9455418, "num_input_records": 6774, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192006495, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729192010339, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:11.906473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:12 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:33:12.010691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.043708424 +0000 UTC m=+0.086431180 container create 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.009025095 +0000 UTC m=+0.051747911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:13 compute-0 systemd[1]: Started libpod-conmon-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope.
Dec  3 02:33:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.204505082 +0000 UTC m=+0.247227898 container init 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.220870694 +0000 UTC m=+0.263593440 container start 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.227609084 +0000 UTC m=+0.270331850 container attach 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:13 compute-0 priceless_herschel[472498]: 167 167
Dec  3 02:33:13 compute-0 systemd[1]: libpod-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope: Deactivated successfully.
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.233388047 +0000 UTC m=+0.276110853 container died 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 02:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2a7a77478c0b1774309140c42bfc34f031a216b95798c91a0a363b75d3f1a8-merged.mount: Deactivated successfully.
Dec  3 02:33:13 compute-0 podman[472483]: 2025-12-03 02:33:13.308395264 +0000 UTC m=+0.351117990 container remove 9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:33:13 compute-0 systemd[1]: libpod-conmon-9820aa6ecaa819c941523e94380abb620e3f12423f4cb1d5e0f3abdc9072bfd2.scope: Deactivated successfully.
Dec  3 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.615692295 +0000 UTC m=+0.092582853 container create e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.577192639 +0000 UTC m=+0.054083237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:13 compute-0 systemd[1]: Started libpod-conmon-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope.
Dec  3 02:33:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.812462538 +0000 UTC m=+0.289353086 container init e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.832826013 +0000 UTC m=+0.309716551 container start e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:33:13 compute-0 podman[472521]: 2025-12-03 02:33:13.840341885 +0000 UTC m=+0.317232413 container attach e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:33:14 compute-0 nova_compute[351485]: 2025-12-03 02:33:14.284 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]: {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    "0": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "devices": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "/dev/loop3"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            ],
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_name": "ceph_lv0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_size": "21470642176",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "name": "ceph_lv0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "tags": {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_name": "ceph",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.crush_device_class": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.encrypted": "0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_id": "0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.vdo": "0"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            },
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "vg_name": "ceph_vg0"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        }
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    ],
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    "1": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "devices": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "/dev/loop4"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            ],
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_name": "ceph_lv1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_size": "21470642176",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "name": "ceph_lv1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "tags": {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_name": "ceph",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.crush_device_class": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.encrypted": "0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_id": "1",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.vdo": "0"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            },
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "vg_name": "ceph_vg1"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        }
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    ],
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    "2": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "devices": [
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "/dev/loop5"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            ],
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_name": "ceph_lv2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_size": "21470642176",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "name": "ceph_lv2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "tags": {
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.cluster_name": "ceph",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.crush_device_class": "",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.encrypted": "0",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osd_id": "2",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:                "ceph.vdo": "0"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            },
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "type": "block",
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:            "vg_name": "ceph_vg2"
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:        }
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]:    ]
Dec  3 02:33:14 compute-0 unruffled_brattain[472536]: }
Dec  3 02:33:14 compute-0 systemd[1]: libpod-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope: Deactivated successfully.
Dec  3 02:33:14 compute-0 podman[472521]: 2025-12-03 02:33:14.717281381 +0000 UTC m=+1.194171939 container died e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-afda988bbab7f86a53aeed48018e2bc747b129ca64566114b7d5e2e39289eea3-merged.mount: Deactivated successfully.
Dec  3 02:33:14 compute-0 podman[472521]: 2025-12-03 02:33:14.824286731 +0000 UTC m=+1.301177249 container remove e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:33:14 compute-0 systemd[1]: libpod-conmon-e7437255c6efdf5cbaf72eb8612858c3b6679cc0cf10f34411dc8d80e0673e63.scope: Deactivated successfully.
Dec  3 02:33:14 compute-0 podman[472545]: 2025-12-03 02:33:14.88059602 +0000 UTC m=+0.165649266 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:33:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:15 compute-0 podman[472714]: 2025-12-03 02:33:15.914752364 +0000 UTC m=+0.087964394 container create 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:33:15 compute-0 podman[472714]: 2025-12-03 02:33:15.88237255 +0000 UTC m=+0.055584620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:15 compute-0 systemd[1]: Started libpod-conmon-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope.
Dec  3 02:33:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.059113808 +0000 UTC m=+0.232325878 container init 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.078015861 +0000 UTC m=+0.251227881 container start 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.084846134 +0000 UTC m=+0.258058194 container attach 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:16 compute-0 relaxed_cori[472728]: 167 167
Dec  3 02:33:16 compute-0 systemd[1]: libpod-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope: Deactivated successfully.
Dec  3 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.092912922 +0000 UTC m=+0.266124952 container died 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f531e3fafb8265f1e65607d04b278b1f9ed3ee759fb64a86bcd8f9fc24ecb9c-merged.mount: Deactivated successfully.
Dec  3 02:33:16 compute-0 podman[472714]: 2025-12-03 02:33:16.179016331 +0000 UTC m=+0.352228351 container remove 5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:33:16 compute-0 systemd[1]: libpod-conmon-5b991ef81930a47eb858e18f29717a4b7243580931c6865e07e943154ecd2145.scope: Deactivated successfully.
Dec  3 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.495284386 +0000 UTC m=+0.121628863 container create 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.438873835 +0000 UTC m=+0.065218372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:33:16 compute-0 systemd[1]: Started libpod-conmon-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope.
Dec  3 02:33:16 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.70909159 +0000 UTC m=+0.335436087 container init 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  3 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.728490427 +0000 UTC m=+0.354834914 container start 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:33:16 compute-0 podman[472752]: 2025-12-03 02:33:16.736693309 +0000 UTC m=+0.363037826 container attach 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 02:33:16 compute-0 nova_compute[351485]: 2025-12-03 02:33:16.874 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:17 compute-0 ovn_controller[89134]: 2025-12-03T02:33:17Z|00202|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Dec  3 02:33:17 compute-0 podman[472794]: 2025-12-03 02:33:17.872335646 +0000 UTC m=+0.096846733 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  3 02:33:17 compute-0 podman[472793]: 2025-12-03 02:33:17.877690437 +0000 UTC m=+0.106190357 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release-0.7.12=)
Dec  3 02:33:17 compute-0 podman[472791]: 2025-12-03 02:33:17.880120666 +0000 UTC m=+0.119844142 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]: {
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_id": 2,
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "type": "bluestore"
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    },
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_id": 1,
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "type": "bluestore"
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    },
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_id": 0,
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:        "type": "bluestore"
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]:    }
Dec  3 02:33:17 compute-0 nostalgic_shtern[472768]: }
Dec  3 02:33:17 compute-0 podman[472792]: 2025-12-03 02:33:17.898673239 +0000 UTC m=+0.137345596 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:33:17 compute-0 systemd[1]: libpod-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Deactivated successfully.
Dec  3 02:33:17 compute-0 systemd[1]: libpod-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Consumed 1.188s CPU time.
Dec  3 02:33:17 compute-0 podman[472752]: 2025-12-03 02:33:17.924409815 +0000 UTC m=+1.550754262 container died 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:33:17 compute-0 podman[472789]: 2025-12-03 02:33:17.936121886 +0000 UTC m=+0.177139899 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53037716e1e6efde6dbd19c251e2c115c00e5fea6eb8d3f9364d68541151920-merged.mount: Deactivated successfully.
Dec  3 02:33:17 compute-0 podman[472752]: 2025-12-03 02:33:17.990572492 +0000 UTC m=+1.616916929 container remove 89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:33:18 compute-0 systemd[1]: libpod-conmon-89ab56c625ecde76bf113ca98e17db29aa91aaf5dbffbae42f3b848933a87f33.scope: Deactivated successfully.
Dec  3 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:33:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:33:18 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev fad24750-0aab-4270-83d8-4edff869152a does not exist
Dec  3 02:33:18 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 50e1f2ae-6872-4a64-a957-5b452d07e7eb does not exist
Dec  3 02:33:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:19 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:33:19 compute-0 nova_compute[351485]: 2025-12-03 02:33:19.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.516 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.517 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.517 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.531 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e5615730>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:33:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:33:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:21 compute-0 nova_compute[351485]: 2025-12-03 02:33:21.879 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:24 compute-0 nova_compute[351485]: 2025-12-03 02:33:24.292 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:24 compute-0 nova_compute[351485]: 2025-12-03 02:33:24.607 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:26 compute-0 nova_compute[351485]: 2025-12-03 02:33:26.882 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:33:28
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta']
Dec  3 02:33:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:33:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:29 compute-0 nova_compute[351485]: 2025-12-03 02:33:29.296 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:33:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:29 compute-0 podman[158098]: time="2025-12-03T02:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:33:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8202 "" "Go-http-client/1.1"
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:33:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:33:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:31 compute-0 nova_compute[351485]: 2025-12-03 02:33:31.886 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.848 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.850 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.850 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:33:32 compute-0 nova_compute[351485]: 2025-12-03 02:33:32.851 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:33:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:33:33 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581721207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:33:33 compute-0 nova_compute[351485]: 2025-12-03 02:33:33.385 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:33:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.017 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.019 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3953MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.019 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.020 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.117 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.118 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.298 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.322 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:33:34 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:33:34 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406111295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.829 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.841 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.854 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.856 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:33:34 compute-0 nova_compute[351485]: 2025-12-03 02:33:34.856 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:33:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:36 compute-0 podman[473015]: 2025-12-03 02:33:36.879193515 +0000 UTC m=+0.115561113 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:33:36 compute-0 nova_compute[351485]: 2025-12-03 02:33:36.889 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:36 compute-0 podman[473013]: 2025-12-03 02:33:36.892435848 +0000 UTC m=+0.138275963 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:33:36 compute-0 podman[473014]: 2025-12-03 02:33:36.899607861 +0000 UTC m=+0.143338676 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 02:33:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.857 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.857 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.858 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.897 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.897 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:37 compute-0 nova_compute[351485]: 2025-12-03 02:33:37.899 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.300 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:39 compute-0 nova_compute[351485]: 2025-12-03 02:33:39.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:41 compute-0 nova_compute[351485]: 2025-12-03 02:33:41.893 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:43 compute-0 systemd-logind[800]: New session 63 of user zuul.
Dec  3 02:33:43 compute-0 systemd[1]: Started Session 63 of User zuul.
Dec  3 02:33:43 compute-0 nova_compute[351485]: 2025-12-03 02:33:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.302 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:44 compute-0 nova_compute[351485]: 2025-12-03 02:33:44.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:33:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:45 compute-0 podman[473174]: 2025-12-03 02:33:45.867003538 +0000 UTC m=+0.119250906 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 02:33:46 compute-0 nova_compute[351485]: 2025-12-03 02:33:46.897 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:46 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15543 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:33:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:33:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:33:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809714596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:33:47 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15549 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 02:33:48 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295810968' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 02:33:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:48 compute-0 podman[473343]: 2025-12-03 02:33:48.887870876 +0000 UTC m=+0.111147698 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:33:48 compute-0 podman[473342]: 2025-12-03 02:33:48.904694341 +0000 UTC m=+0.138293934 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc.)
Dec  3 02:33:48 compute-0 podman[473350]: 2025-12-03 02:33:48.919685844 +0000 UTC m=+0.124796343 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4)
Dec  3 02:33:48 compute-0 podman[473362]: 2025-12-03 02:33:48.923186623 +0000 UTC m=+0.118670610 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:33:48 compute-0 podman[473341]: 2025-12-03 02:33:48.923348727 +0000 UTC m=+0.165907493 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:33:49 compute-0 nova_compute[351485]: 2025-12-03 02:33:49.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:51 compute-0 ovs-vsctl[473476]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 02:33:51 compute-0 nova_compute[351485]: 2025-12-03 02:33:51.900 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 02:33:53 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 02:33:53 compute-0 nova_compute[351485]: 2025-12-03 02:33:53.325 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:33:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: cache status {prefix=cache status} (starting...)
Dec  3 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: client ls {prefix=client ls} (starting...)
Dec  3 02:33:54 compute-0 lvm[473802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 02:33:54 compute-0 lvm[473802]: VG ceph_vg0 finished
Dec  3 02:33:54 compute-0 nova_compute[351485]: 2025-12-03 02:33:54.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:54 compute-0 lvm[473814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 02:33:54 compute-0 lvm[473814]: VG ceph_vg1 finished
Dec  3 02:33:54 compute-0 lvm[473878]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 02:33:54 compute-0 lvm[473878]: VG ceph_vg2 finished
Dec  3 02:33:54 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 02:33:54 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 02:33:55 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 02:33:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:55 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  3 02:33:55 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187494540' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  3 02:33:55 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 02:33:56 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028655004' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:33:56 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:56 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:33:56 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:33:56.339+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:33:56 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: ops {prefix=ops} (starting...)
Dec  3 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  3 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369402920' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  3 02:33:56 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  3 02:33:56 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099872981' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  3 02:33:56 compute-0 nova_compute[351485]: 2025-12-03 02:33:56.902 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:57 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: session ls {prefix=session ls} (starting...)
Dec  3 02:33:57 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: status {prefix=status} (starting...)
Dec  3 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172274940' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  3 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550125421' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  3 02:33:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:57 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 02:33:57 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693103312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 02:33:57 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577305353' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:33:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510990327' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:33:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  3 02:33:58 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676029562' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  3 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233607358' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  3 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3492666941' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  3 02:33:59 compute-0 nova_compute[351485]: 2025-12-03 02:33:59.314 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:33:59 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 02:33:59 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325094571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 02:33:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15591 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:33:59 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 02:33:59 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:33:59.640+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:33:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:33:59.670 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:33:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:33:59 compute-0 podman[158098]: time="2025-12-03T02:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:33:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8200 "" "Go-http-client/1.1"
Dec  3 02:33:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  3 02:34:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1313203538' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  3 02:34:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:00 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  3 02:34:00 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936925648' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  3 02:34:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 02:34:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704523019' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105390080 unmapped: 3162112 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 3153920 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa5a5000/0x0/0x4ffc00000, data 0x25b276a/0x2679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 3145728 heap: 108552192 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334501 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.043724060s of 64.707359314s, submitted: 90
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b8521f680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b8634b2c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b8634a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a93c00 session 0x558b85fc3a40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b85fc2f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b84a2cd20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b84a2cf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b83a950e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa225000/0x0/0x4ffc00000, data 0x293276a/0x29f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a92000 session 0x558b8625ab40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b86260000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b85bbcb40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8c00 session 0x558b85bbcd20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec9400 session 0x558b85bbda40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105439232 unmapped: 9412608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359205 data_alloc: 234881024 data_used: 24854528
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b84a92400 session 0x558b83ca5c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105455616 unmapped: 9396224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b8624a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8000 session 0x558b8624be00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 9379840 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 9355264 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 9355264 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 6537216 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1383123 data_alloc: 251658240 data_used: 28000256
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5996544 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108855296 unmapped: 5996544 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108863488 unmapped: 5988352 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108896256 unmapped: 5955584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 5914624 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 5914624 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 5881856 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108978176 unmapped: 5873664 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1386963 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa224000/0x0/0x4ffc00000, data 0x293277a/0x29fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108986368 unmapped: 5865472 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.798915863s of 43.856277466s, submitted: 3
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 4931584 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1401303 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110108672 unmapped: 4743168 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112492544 unmapped: 2359296 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec6000/0x0/0x4ffc00000, data 0x2af077a/0x2bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 2326528 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414313 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 2326528 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec6000/0x0/0x4ffc00000, data 0x2af077a/0x2bb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2220032 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112640000 unmapped: 2211840 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 openstack_network_exporter[368278]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 2318336 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 2310144 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 2301952 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 2293760 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 2285568 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 2277376 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 2269184 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 2260992 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 2252800 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 2244608 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 2236416 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8ec5000/0x0/0x4ffc00000, data 0x2af177a/0x2bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 2228224 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1414577 data_alloc: 251658240 data_used: 28524544
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c47000 session 0x558b83d4ad20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 182.060043335s of 182.239807129s, submitted: 27
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c49800 session 0x558b84a2c3c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85e1c400 session 0x558b8624af00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 7315456 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83a73000 session 0x558b8634b2c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85e1c000 session 0x558b85bac780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 234881024 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107331584 unmapped: 7520256 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 7512064 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244362 data_alloc: 218103808 data_used: 19968000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c46c00 session 0x558b8625af00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 121.117050171s of 121.235237122s, submitted: 20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85707000 session 0x558b849b9860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9be3000/0x0/0x4ffc00000, data 0x1dd4708/0x1e9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b85ec8800 session 0x558b83ca30e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 7503872 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 10043392 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 ms_handle_reset con 0x558b83c49800 session 0x558b85bbc5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180429 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 9986048 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76a6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.548728943s of 39.757991791s, submitted: 30
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 10289152 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182035 data_alloc: 218103808 data_used: 16781312
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104562688 unmapped: 10289152 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104587264 unmapped: 10264576 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104587264 unmapped: 10264576 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x18f76c9/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 18161664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b83a941e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238520 data_alloc: 218103808 data_used: 16789504
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9931000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238520 data_alloc: 218103808 data_used: 16789504
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b844081e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b85fc2960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b85fc25a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104628224 unmapped: 26058752 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9931000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8000 session 0x558b85fc21e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105807872 unmapped: 24879104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b8624a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.587652206s of 14.710074425s, submitted: 13
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b8624be00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b84974780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b862fa5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 24707072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320516 data_alloc: 234881024 data_used: 16769024
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a92c00 session 0x558b862fb0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b862faf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b862fab40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b8634b0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 25133056 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b8634a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a93000 session 0x558b83c523c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 25272320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 25264128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 25264128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83a73000 session 0x558b83ca43c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105070592 unmapped: 25616384 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324327 data_alloc: 234881024 data_used: 16769024
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 25600000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 105086976 unmapped: 25600000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 26066944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403339 data_alloc: 234881024 data_used: 27774976
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 19963904 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1403499 data_alloc: 234881024 data_used: 27779072
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 19955712 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f8e84000/0x0/0x4ffc00000, data 0x2b32c46/0x2bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca2000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85707000 session 0x558b862fb4a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.913837433s of 27.059389114s, submitted: 26
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b85ec8800 session 0x558b85de2d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 26017792 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 ms_handle_reset con 0x558b84a93400 session 0x558b84976000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9932000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f9932000/0x0/0x4ffc00000, data 0x2084c46/0x214c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 26001408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248005 data_alloc: 234881024 data_used: 16769024
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 104693760 unmapped: 25993216 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b83a73000 session 0x558b85278f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181099 data_alloc: 218103808 data_used: 9981952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181099 data_alloc: 218103808 data_used: 9981952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b85e1d400 session 0x558b8624ab40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x18fadf4/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 ms_handle_reset con 0x558b85707000 session 0x558b862603c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.032296181s of 15.317586899s, submitted: 49
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 32047104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x18fc857/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 30973952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184738 data_alloc: 218103808 data_used: 9981952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 30973952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 32022528 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9929000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 7002 writes, 28K keys, 7002 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7002 writes, 1484 syncs, 4.72 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 615 writes, 1901 keys, 615 commit groups, 1.0 writes per commit group, ingest: 1.36 MB, 0.00 MB/s
Interval WAL: 615 writes, 283 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240968 data_alloc: 218103808 data_used: 9990144
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9929000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98738176 unmapped: 31948800 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.118351936s of 13.264735222s, submitted: 27
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 31932416 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240312 data_alloc: 218103808 data_used: 9990144
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 ms_handle_reset con 0x558b83c48000 session 0x558b84a2cd20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f992a000/0x0/0x4ffc00000, data 0x2089dd4/0x2154000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192646 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x18fffa5/0x19cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98369536 unmapped: 32317440 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195620 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.878738403s of 78.146438599s, submitted: 53
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 99450880 unmapped: 31236096 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 31907840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec8c00 session 0x558b8624b0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec9400 session 0x558b85fc3c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b84a92800 session 0x558b86376d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.205551147s of 100.784317017s, submitted: 90
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b83a94f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.514204025s of 18.552843094s, submitted: 8
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c47400 session 0x558b85fc23c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b843b6000 session 0x558b85bbdc20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c46c00 session 0x558b862aa3c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 33923072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b8634be00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.285675049s of 85.578956604s, submitted: 49
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49e8/0x98f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 38486016 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 134 ms_handle_reset con 0x558b83c46c00 session 0x558b85fc3860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 38453248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 38936576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86376960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b84927860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8311e3c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8625a960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85278000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862841e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 53.474491119s of 53.753597260s, submitted: 32
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b86284f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b852443c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85244960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862fa780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b849b8f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166167 data_alloc: 218103808 data_used: 4714496
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 34283520 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86376d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86377e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8625a960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b84927860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b85fc3c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175957 data_alloc: 218103808 data_used: 4714496
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b85fc23c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634be00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbdc20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b83a94f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624b0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b84a2cd20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b862841e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86284f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x20081dd/0x20dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b86376000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 34177024 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec9c00 session 0x558b86376b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 34193408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b863763c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.407323837s of 10.792876244s, submitted: 47
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231863 data_alloc: 218103808 data_used: 4722688
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b862fbe00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b844090e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138000 session 0x558b844225a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b8625b4a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849b92c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x229020f/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634b860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 33857536 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262169 data_alloc: 218103808 data_used: 8060928
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b862faf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b862fab40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 32735232 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83ca2000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca43c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291687 data_alloc: 234881024 data_used: 12001280
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 32292864 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9306000/0x0/0x4ffc00000, data 0x2290242/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317927 data_alloc: 234881024 data_used: 15679488
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624a780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b862fb0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 29679616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.547910690s of 18.724098206s, submitted: 24
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b83dbb0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 28319744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318572 data_alloc: 234881024 data_used: 17334272
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b867461e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.113079071s of 19.223537445s, submitted: 17
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 21757952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f85a5000/0x0/0x4ffc00000, data 0x2ff4200/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 21749760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8546000/0x0/0x4ffc00000, data 0x3053200/0x3128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415883 data_alloc: 234881024 data_used: 16392192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b867465a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b86746780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86746960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b86746b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7c5f000/0x0/0x4ffc00000, data 0x393a200/0x3a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522536 data_alloc: 234881024 data_used: 16547840
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b86746d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b86747a40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b84410d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 20561920 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b84408b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b83c53680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530322 data_alloc: 234881024 data_used: 16613376
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b83c52960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.731391907s of 15.542462349s, submitted: 209
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530454 data_alloc: 234881024 data_used: 16613376
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528914 data_alloc: 234881024 data_used: 16613376
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 19881984 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 18202624 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.591274261s of 17.624994278s, submitted: 3
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 16539648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.202155113s of 10.243181229s, submitted: 7
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567254 data_alloc: 234881024 data_used: 21929984
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 16515072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86285680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b862852c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849272c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b86377e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624be00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8521f680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8ebc000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336665 data_alloc: 234881024 data_used: 13844480
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336445 data_alloc: 234881024 data_used: 13844480
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562313080s of 15.728222847s, submitted: 35
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 19046400 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113491968 unmapped: 17195008 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2af716b/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 16687104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386715 data_alloc: 234881024 data_used: 14835712
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.230500221s of 14.482069969s, submitted: 64
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85e1c800 session 0x558b8624af00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b85bbc3c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbcf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.310770035s of 11.319570541s, submitted: 1
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b85bbda40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b85de6000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634b4a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b867461e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b867465a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b86746960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fa780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fb0e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624b680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8624a3c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497051 data_alloc: 234881024 data_used: 14835712
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b8624bc20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624a000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18735104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b83ca3a40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 18677760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 18661376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 18612224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b83ca2000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502491 data_alloc: 234881024 data_used: 15368192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.251416206s of 11.426207542s, submitted: 28
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18563072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2400 session 0x558b862abe00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504272 data_alloc: 234881024 data_used: 15368192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 18366464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521392 data_alloc: 234881024 data_used: 17547264
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 15892480 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 15409152 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764178276s of 10.793314934s, submitted: 4
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552128 data_alloc: 234881024 data_used: 21549056
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 13492224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 12271616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b861f5c00 session 0x558b8311e1e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b849774a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86285c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x326017b/0x3333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b849b9860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.645172119s of 11.731092453s, submitted: 27
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de3a40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 18604032 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85fc25a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.237525940s of 16.362010956s, submitted: 31
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8eab000/0x0/0x4ffc00000, data 0x26f1158/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,2])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 11460608 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 11444224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f850d000/0x0/0x4ffc00000, data 0x3081158/0x3153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 11288576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484722 data_alloc: 234881024 data_used: 17747968
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b84974000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea3400 session 0x558b863774a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478306 data_alloc: 234881024 data_used: 17756160
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114073600 unmapped: 16613376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f845f000/0x0/0x4ffc00000, data 0x313d158/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b8521e960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 33947648 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.629374504s of 10.309167862s, submitted: 151
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b83d11680
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88138400 session 0x558b84410d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b84927e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8634a780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b84926b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b85244960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b852785a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b83c52780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8624bc20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624a000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624b4a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c1c000/0x0/0x4ffc00000, data 0x297b13e/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 33832960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377459 data_alloc: 234881024 data_used: 11022336
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 33841152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139800 session 0x558b849274a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b862fa960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b8624af00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85a8bc00 session 0x558b849774a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98b9/0x20ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253805 data_alloc: 218103808 data_used: 4730880
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b85de61e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269378 data_alloc: 218103808 data_used: 6901760
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b86284d20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.745358467s of 14.380681038s, submitted: 113
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b83d4a1e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea3400 session 0x558b84a2cf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2400 session 0x558b84927e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b844101e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b84408000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294828 data_alloc: 218103808 data_used: 8527872
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3a1/0x2135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b8521ed20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.471420288s of 31.809257507s, submitted: 56
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 32382976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fdd000/0x0/0x4ffc00000, data 0x25b83c4/0x2691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 32808960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359190 data_alloc: 234881024 data_used: 9846784
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 32759808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369410 data_alloc: 234881024 data_used: 9990144
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 32505856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b19000/0x0/0x4ffc00000, data 0x2a7c3c4/0x2b55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.458605766s of 11.824682236s, submitted: 72
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404216 data_alloc: 234881024 data_used: 10186752
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 32964608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: mgrc ms_handle_reset ms_handle_reset con 0x558b85ea3000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec  3 02:34:01 compute-0 ceph-osd[208731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: mgrc handle_mgr_configure stats_period=5
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.394577026s of 18.451017380s, submitted: 14
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411668 data_alloc: 234881024 data_used: 10194944
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411304 data_alloc: 234881024 data_used: 10215424
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b85de74a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b862852c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b8634af00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 69.777755737s of 69.905036926s, submitted: 21
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 33890304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 33865728 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 33824768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 33783808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 33742848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86746780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b86284b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b849774a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 33734656 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86747a40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.146961212s of 54.832401276s, submitted: 108
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b84a2cf00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b83df8b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85fc2b40
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b86285860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b8521e960
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b83d4a1e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b85278780
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85279e00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b84409860
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 33095680 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446880 data_alloc: 234881024 data_used: 15056896
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 29958144 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 29736960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.778945923s of 44.865009308s, submitted: 6
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 26681344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8588000/0x0/0x4ffc00000, data 0x300e3a1/0x30e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 28499968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095444552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.571006775s of 112.683082581s, submitted: 22
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 28418048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.938446045s of 35.963088989s, submitted: 3
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 28311552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 189.647811890s of 189.655746460s, submitted: 1
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 28196864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.277404785s of 163.285751343s, submitted: 1
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 28164096 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 28147712 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 28123136 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 28057600 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.532619476s of 50.152313232s, submitted: 90
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139c00 session 0x558b83ca5c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ec6c00 session 0x558b849b8000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 28000256 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501242 data_alloc: 234881024 data_used: 18178048
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86260f00
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86284000
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b879ce000 session 0x558b862aa5a0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970065117s of 11.374399185s, submitted: 57
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b844112c0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.220892906s of 11.268563271s, submitted: 8
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 34070528 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 ms_handle_reset con 0x558b83c47000 session 0x558b863761e0
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x153eeba/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 41213952 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 140 ms_handle_reset con 0x558b85ec6c00 session 0x558b86285c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 41205760 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 41156608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b879ce000 session 0x558b85fc3c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.120633125s of 12.398234367s, submitted: 53
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b88139000 session 0x558b83ca3c20
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:01 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 40976384 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 41009152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:01 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:34:01 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
Dec  3 02:34:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:34:01 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:01 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:34:01 compute-0 nova_compute[351485]: 2025-12-03 02:34:01.904 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739375453' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101401782' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15617 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:34:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 02:34:02 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745656350' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 02:34:02 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15621 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:34:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  3 02:34:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405273975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  3 02:34:03 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:34:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:34:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:34:04 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:34:04 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:34:04.296+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:34:04 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:34:04 compute-0 nova_compute[351485]: 2025-12-03 02:34:04.315 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  3 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247554239' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  3 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  3 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630472333' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  3 02:34:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  3 02:34:04 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699661219' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  3 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  3 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50831080' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  3 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  3 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460830282' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  3 02:34:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  3 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331314447' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  3 02:34:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  3 02:34:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311555939' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  3 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  3 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914881831' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  3 02:34:06 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  3 02:34:06 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908467597' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 6938624 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448525 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a7eb7e00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a7eb74a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6b800 session 0x55f0a562fe00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 6930432 heap: 116023296 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75b7c20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f847a000/0x0/0x4ffc00000, data 0x312bb74/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.961769104s of 51.003173828s, submitted: 11
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a75b70e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a8020b40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a6b8fa40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3b800 session 0x55f0a57e3680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 10035200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a57e23c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a8545680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a4d7d4a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 10002432 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a8593800 session 0x55f0a85710e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 9994240 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520933 data_alloc: 234881024 data_used: 20029440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110231552 unmapped: 9994240 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7c20000/0x0/0x4ffc00000, data 0x3988be6/0x3a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 9977856 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 9969664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3fc00 session 0x55f0a96b8960
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 10469376 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be4000/0x0/0x4ffc00000, data 0x39c4be6/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 10461184 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529030 data_alloc: 234881024 data_used: 20213760
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 10067968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be4000/0x0/0x4ffc00000, data 0x39c4be6/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 9052160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.774980545s of 13.003511429s, submitted: 38
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 7405568 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 7372800 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 7340032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 7307264 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 7299072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 7299072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 7290880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 7282688 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551026 data_alloc: 234881024 data_used: 23433216
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 7249920 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 7241728 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7be1000/0x0/0x4ffc00000, data 0x39c7be6/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 7241728 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.193262100s of 30.209300995s, submitted: 3
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575144 data_alloc: 234881024 data_used: 23506944
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 4890624 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76ec000/0x0/0x4ffc00000, data 0x3ebcbe6/0x3f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 5292032 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 5283840 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 5251072 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 5242880 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 6037504 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 6029312 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 6021120 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 6021120 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 6012928 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 6004736 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 5988352 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 5996544 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 5980160 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4d41400 session 0x55f0a58863c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3ec00 session 0x55f0a75c0960
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b3f400 session 0x55f0a4d7de00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 5971968 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 5963776 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 5955584 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 5947392 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 5939200 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114294784 unmapped: 5931008 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f76e1000/0x0/0x4ffc00000, data 0x3ec6be6/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1597544 data_alloc: 234881024 data_used: 23687168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 5922816 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 182.688186646s of 182.889434814s, submitted: 45
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8570f00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a8b46f00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7450c00 session 0x55f0a8df7680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115359744 unmapped: 4866048 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a60c7400 session 0x55f0a7db6b40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7cfe000/0x0/0x4ffc00000, data 0x38acbc6/0x3970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a785b000 session 0x55f0a8173e00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115376128 unmapped: 4849664 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 4841472 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 4833280 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 4825088 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 4816896 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 4816896 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 4808704 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 4800512 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 4792320 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 4784128 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1541721 data_alloc: 234881024 data_used: 23597056
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x38a8bc6/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 4767744 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.761978149s of 121.249809265s, submitted: 73
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a810a780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6f800 session 0x55f0a57e2f00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6d800 session 0x55f0a97ec780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 4759552 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 ms_handle_reset con 0x55f0a7e6a000 session 0x55f0a7e365a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902e000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334755 data_alloc: 218103808 data_used: 16007168
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 8609792 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.468624115s of 39.815692902s, submitted: 54
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 8601600 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 8601600 heap: 120225792 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 25346048 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f902f000/0x0/0x4ffc00000, data 0x257cba3/0x263f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 25346048 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391308 data_alloc: 218103808 data_used: 16011264
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 25321472 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4ba3c00 session 0x55f0a7eb7e00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395258 data_alloc: 218103808 data_used: 16019456
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 25837568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a6b8fa40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a694b800 session 0x55f0a75bc780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a56ac1e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053081512s of 13.177683830s, submitted: 13
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a7e6e400 session 0x55f0a85aab40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 17645568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a858dc00 session 0x55f0a58870e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414858 data_alloc: 234881024 data_used: 22835200
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a756c780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 17645568 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f882c000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1,5])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a85ab2c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a85445a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a8544780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b39000 session 0x55f0a85abc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a8b47680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a8544d20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 17580032 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a6b8e780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a5897860
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595800 session 0x55f0a5538000
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a96b81e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a562f4a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4d40800 session 0x55f0a96b90e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8590000 session 0x55f0a56acd20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595000 session 0x55f0a54dc000
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 17530880 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a5542960
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467161 data_alloc: 234881024 data_used: 22835200
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 17227776 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b40c00 session 0x55f0a7e36f00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82e0000/0x0/0x4ffc00000, data 0x32c5826/0x338e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 17227776 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 17211392 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 17211392 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119832576 unmapped: 17178624 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1480806 data_alloc: 234881024 data_used: 24236032
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119889920 unmapped: 17121280 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 16646144 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1503206 data_alloc: 234881024 data_used: 27332608
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f82bc000/0x0/0x4ffc00000, data 0x32e9826/0x33b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 16842752 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 16834560 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a7db50e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.228794098s of 28.534566879s, submitted: 48
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a60c9400 session 0x55f0a6acef00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a8595400 session 0x55f0a75bbc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 16809984 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 ms_handle_reset con 0x55f0a4b3d800 session 0x55f0a85701e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425718 data_alloc: 234881024 data_used: 22835200
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f8828000/0x0/0x4ffc00000, data 0x2d7e720/0x2e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 17809408 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 17801216 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f8828000/0x0/0x4ffc00000, data 0x2d802f1/0x2e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 ms_handle_reset con 0x55f0a694b000 session 0x55f0a75bad20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354716 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9028000/0x0/0x4ffc00000, data 0x25802f1/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 24453120 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354716 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.647365570s of 15.350210190s, submitted: 100
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 8945 writes, 34K keys, 8945 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8945 writes, 2107 syncs, 4.25 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1124 writes, 3389 keys, 1124 commit groups, 1.0 writes per commit group, ingest: 2.51 MB, 0.00 MB/s#012Interval WAL: 1124 writes, 495 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9028000/0x0/0x4ffc00000, data 0x25802f1/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 24444928 heap: 137011200 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357690 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 24453120 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f8825000/0x0/0x4ffc00000, data 0x2d81d54/0x2e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 ms_handle_reset con 0x55f0a56d1400 session 0x55f0a75c34a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8825000/0x0/0x4ffc00000, data 0x2d81d54/0x2e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417328 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417328 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.317202568s of 13.401175499s, submitted: 16
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x2d838f4/0x2e4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a75bba40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364910 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x25854a2/0x264e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 32849920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 32841728 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 32833536 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 32825344 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901c000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 77.925277710s of 78.037933350s, submitted: 34
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 32817152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 32784384 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 32727040 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 32718848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 32710656 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75bd0e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.323188782s of 99.935211182s, submitted: 90
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a529c960
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3a800 session 0x55f0a57e3860
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112738304 unmapped: 32669696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9d79000/0x0/0x4ffc00000, data 0x182af05/0x18f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75c1680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.926574707s of 19.366115570s, submitted: 65
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3dc00 session 0x55f0a7f5bc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a57ab000 session 0x55f0a56af2c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 38379520 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a553f680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.550582886s of 85.862503052s, submitted: 51
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 134 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbbc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107044864 unmapped: 38363136 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142798 data_alloc: 218103808 data_used: 7061504
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a4d42b40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a7ecc1e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a7ecc3c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 31768576 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167396 data_alloc: 218103808 data_used: 13885440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.427654266s of 53.600463867s, submitted: 15
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 27410432 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a726d860
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a726d680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a553e000
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a75b83c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 31604736 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbba40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a64152c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a80205a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a756c780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a54dcf00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6a400 session 0x55f0a56aed20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303610 data_alloc: 218103808 data_used: 13885440
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a54dd4a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a8b46780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 31432704 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a8df6b40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a4d7d680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8592800 session 0x55f0a5578d20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a4b37c20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a84fa780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a7eb74a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 31105024 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a52a65a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a885b2c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 31080448 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114343936 unmapped: 31064064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400757 data_alloc: 218103808 data_used: 13889536
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 31055872 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290942192s of 10.699803352s, submitted: 40
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a58825a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e71400 session 0x55f0a810bc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a81732c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467170 data_alloc: 218103808 data_used: 13893632
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 31907840 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 31899648 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 31236096 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516450 data_alloc: 234881024 data_used: 20791296
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7450400 session 0x55f0a7eb72c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 28491776 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.427840233s of 10.600404739s, submitted: 22
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 23896064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 125722624 unmapped: 19685376 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 14385152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680281 data_alloc: 251658240 data_used: 40378368
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a96b8f00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a7eced20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 12574720 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a6afd680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1544478 data_alloc: 234881024 data_used: 33796096
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 15441920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 12189696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83f5000/0x0/0x4ffc00000, data 0x31ad55e/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a57e6780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.212381363s of 13.371566772s, submitted: 31
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7ec3c00 session 0x55f0a8571e00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596346 data_alloc: 251658240 data_used: 41136128
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 14958592 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594c00 session 0x55f0a57e23c0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.805484772s of 13.987822533s, submitted: 30
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533936 data_alloc: 234881024 data_used: 33460224
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2fc455e/0x3090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585670 data_alloc: 234881024 data_used: 34004992
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 10543104 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8073000/0x0/0x4ffc00000, data 0x352955e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d800 session 0x55f0a60fc780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135520256 unmapped: 20389888 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 18022400 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f780e000/0x0/0x4ffc00000, data 0x3d8555e/0x3e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667382 data_alloc: 251658240 data_used: 34631680
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a4b37e00
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8597000 session 0x55f0a810bc20
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.885555267s of 13.601085663s, submitted: 132
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 18956288 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f781b000/0x0/0x4ffc00000, data 0x3d8755e/0x3e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594000 session 0x55f0a810b0e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a810a000
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665953 data_alloc: 251658240 data_used: 34635776
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f6000/0x0/0x4ffc00000, data 0x3dab56e/0x3e78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665621 data_alloc: 251658240 data_used: 34635776
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 17981440 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695221 data_alloc: 251658240 data_used: 38678528
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 16302080 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717461 data_alloc: 251658240 data_used: 41824256
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.360565186s of 30.452342987s, submitted: 11
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717637 data_alloc: 251658240 data_used: 41824256
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 15360000 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a8df70e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a81721e0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 15335424 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591400 session 0x55f0a75ba780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6cc00 session 0x55f0a81734a0
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a6414780
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1549753 data_alloc: 234881024 data_used: 34029568
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 19603456 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 19775488 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:34:06 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550873 data_alloc: 234881024 data_used: 34156544
Dec  3 02:34:06 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:34:06 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.911537170s of 15.160791397s, submitted: 53
Dec  3 02:36:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:16 compute-0 rsyslogd[188612]: imjournal: 16637 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 02:36:16 compute-0 nova_compute[351485]: 2025-12-03 02:36:16.985 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:19 compute-0 nova_compute[351485]: 2025-12-03 02:36:19.399 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:19 compute-0 podman[482902]: 2025-12-03 02:36:19.894685068 +0000 UTC m=+0.151051894 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:36:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:21 compute-0 nova_compute[351485]: 2025-12-03 02:36:21.988 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:22 compute-0 podman[482921]: 2025-12-03 02:36:22.876781443 +0000 UTC m=+0.116722335 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, version=9.6, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  3 02:36:22 compute-0 podman[482925]: 2025-12-03 02:36:22.893286179 +0000 UTC m=+0.112241639 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 02:36:22 compute-0 podman[482922]: 2025-12-03 02:36:22.89828424 +0000 UTC m=+0.130760712 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:36:22 compute-0 podman[482923]: 2025-12-03 02:36:22.911109592 +0000 UTC m=+0.146229778 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Dec  3 02:36:22 compute-0 podman[482920]: 2025-12-03 02:36:22.916389681 +0000 UTC m=+0.162497687 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 02:36:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:24 compute-0 nova_compute[351485]: 2025-12-03 02:36:24.402 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:25 compute-0 nova_compute[351485]: 2025-12-03 02:36:25.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:36:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:26 compute-0 nova_compute[351485]: 2025-12-03 02:36:26.991 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:36:28
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'volumes']
Dec  3 02:36:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:36:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:36:29 compute-0 nova_compute[351485]: 2025-12-03 02:36:29.406 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:29 compute-0 podman[158098]: time="2025-12-03T02:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:36:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:36:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:36:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:31 compute-0 nova_compute[351485]: 2025-12-03 02:36:31.994 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:34 compute-0 nova_compute[351485]: 2025-12-03 02:36:34.407 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:36 compute-0 nova_compute[351485]: 2025-12-03 02:36:36.997 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.670 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.671 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:36:37 compute-0 nova_compute[351485]: 2025-12-03 02:36:37.672 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:36:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:36:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452019504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.218 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.716 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3949MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.718 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:36:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.803 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.803 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:36:38 compute-0 nova_compute[351485]: 2025-12-03 02:36:38.820 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:36:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:36:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017010438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.331 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.346 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.365 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.368 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.369 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 02:36:39 compute-0 nova_compute[351485]: 2025-12-03 02:36:39.409 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:36:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0b91da6f-62bd-43e6-af92-4a312f8e24b0 does not exist
Dec  3 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 24e99ec4-2549-4ae5-b227-9a173cd6bb42 does not exist
Dec  3 02:36:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 31d7cbd6-73a9-47f6-b65f-37c5a89ca665 does not exist
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.370 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  3 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:36:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:36:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:36:40 compute-0 nova_compute[351485]: 2025-12-03 02:36:40.396 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  3 02:36:40 compute-0 podman[483225]: 2025-12-03 02:36:40.673909537 +0000 UTC m=+0.112404753 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:36:40 compute-0 podman[483226]: 2025-12-03 02:36:40.698954104 +0000 UTC m=+0.126364127 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:36:40 compute-0 podman[483227]: 2025-12-03 02:36:40.704616594 +0000 UTC m=+0.128624911 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:40 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.536325505 +0000 UTC m=+0.085265807 container create 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.502652014 +0000 UTC m=+0.051592366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:41 compute-0 nova_compute[351485]: 2025-12-03 02:36:41.596 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:41 compute-0 systemd[1]: Started libpod-conmon-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope.
Dec  3 02:36:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:41 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:36:41 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.713257998 +0000 UTC m=+0.262198320 container init 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.731923165 +0000 UTC m=+0.280863457 container start 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:36:41 compute-0 compassionate_elgamal[483414]: 167 167
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.744009626 +0000 UTC m=+0.292949918 container attach 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:36:41 compute-0 systemd[1]: libpod-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope: Deactivated successfully.
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.746980539 +0000 UTC m=+0.295920831 container died 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d7152e130522d0ab0b5c0ddadb61bb5afe3af1824a80c1e87ff46198567a791-merged.mount: Deactivated successfully.
Dec  3 02:36:41 compute-0 podman[483398]: 2025-12-03 02:36:41.82850704 +0000 UTC m=+0.377447332 container remove 443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 02:36:41 compute-0 systemd[1]: libpod-conmon-443655007d3fefaf20a215adc0f9f5d6b9b2ab82686c0a57d25f45bc67139931.scope: Deactivated successfully.
Dec  3 02:36:42 compute-0 nova_compute[351485]: 2025-12-03 02:36:42.000 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.078174185 +0000 UTC m=+0.082449488 container create 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.042632882 +0000 UTC m=+0.046908245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:42 compute-0 systemd[1]: Started libpod-conmon-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope.
Dec  3 02:36:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.272490088 +0000 UTC m=+0.276765421 container init 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.30832794 +0000 UTC m=+0.312603243 container start 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:36:42 compute-0 podman[483438]: 2025-12-03 02:36:42.315308597 +0000 UTC m=+0.319583890 container attach 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:42 compute-0 nova_compute[351485]: 2025-12-03 02:36:42.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:43 compute-0 nova_compute[351485]: 2025-12-03 02:36:43.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:43 compute-0 nova_compute[351485]: 2025-12-03 02:36:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:43 compute-0 sweet_swanson[483453]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:36:43 compute-0 sweet_swanson[483453]: --> relative data size: 1.0
Dec  3 02:36:43 compute-0 sweet_swanson[483453]: --> All data devices are unavailable
Dec  3 02:36:43 compute-0 systemd[1]: libpod-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Deactivated successfully.
Dec  3 02:36:43 compute-0 podman[483438]: 2025-12-03 02:36:43.701195876 +0000 UTC m=+1.705471149 container died 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:43 compute-0 systemd[1]: libpod-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Consumed 1.345s CPU time.
Dec  3 02:36:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c89260d352e479b6ed4e7c4052c9bec433698c1ef25ce32d9a8bc4f477fb5e3d-merged.mount: Deactivated successfully.
Dec  3 02:36:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:43 compute-0 podman[483438]: 2025-12-03 02:36:43.793008087 +0000 UTC m=+1.797283380 container remove 1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:36:43 compute-0 systemd[1]: libpod-conmon-1321e0d12a39a9116d6ad476ff4f479f14a53f2c8d151b60bcfd40a630093c5f.scope: Deactivated successfully.
Dec  3 02:36:44 compute-0 nova_compute[351485]: 2025-12-03 02:36:44.411 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.024048737 +0000 UTC m=+0.094457146 container create 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:44.990237263 +0000 UTC m=+0.060645722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:45 compute-0 systemd[1]: Started libpod-conmon-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope.
Dec  3 02:36:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.170171251 +0000 UTC m=+0.240579720 container init 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.188895169 +0000 UTC m=+0.259303588 container start 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.195595158 +0000 UTC m=+0.266003617 container attach 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:36:45 compute-0 gracious_lamarr[483648]: 167 167
Dec  3 02:36:45 compute-0 systemd[1]: libpod-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope: Deactivated successfully.
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.20167916 +0000 UTC m=+0.272087609 container died 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca113133e7f2a0563d56180d855de122728c7f375f13230a978fe2cfa79698f-merged.mount: Deactivated successfully.
Dec  3 02:36:45 compute-0 podman[483633]: 2025-12-03 02:36:45.289309233 +0000 UTC m=+0.359717652 container remove 44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:36:45 compute-0 systemd[1]: libpod-conmon-44bc9686b8be7719739157f46b8637c043ec4791d8406ebef7218499ce881447.scope: Deactivated successfully.
Dec  3 02:36:45 compute-0 nova_compute[351485]: 2025-12-03 02:36:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.591329565 +0000 UTC m=+0.091592386 container create 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.558844268 +0000 UTC m=+0.059107079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:45 compute-0 systemd[1]: Started libpod-conmon-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope.
Dec  3 02:36:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.77719475 +0000 UTC m=+0.277457591 container init 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:36:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.802098903 +0000 UTC m=+0.302361714 container start 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:36:45 compute-0 podman[483673]: 2025-12-03 02:36:45.809047399 +0000 UTC m=+0.309310230 container attach 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.981053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405981074, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 251, "total_data_size": 2452599, "memory_usage": 2489040, "flush_reason": "Manual Compaction"}
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405994777, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2405945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49404, "largest_seqno": 51018, "table_properties": {"data_size": 2398356, "index_size": 4467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16432, "raw_average_key_size": 20, "raw_value_size": 2382916, "raw_average_value_size": 2960, "num_data_blocks": 199, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729249, "oldest_key_time": 1764729249, "file_creation_time": 1764729405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 13769 microseconds, and 5276 cpu microseconds.
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.994820) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2405945 bytes OK
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.994834) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996924) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996935) EVENT_LOG_v1 {"time_micros": 1764729405996932, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.996948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2445435, prev total WAL file size 2445435, number of live WAL files 2.
Dec  3 02:36:45 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.997839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2349KB)], [119(6887KB)]
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729405997917, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9458407, "oldest_snapshot_seqno": -1}
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6521 keys, 7724819 bytes, temperature: kUnknown
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406060653, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7724819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7685166, "index_size": 22263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170533, "raw_average_key_size": 26, "raw_value_size": 7571105, "raw_average_value_size": 1161, "num_data_blocks": 878, "num_entries": 6521, "num_filter_entries": 6521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.061387) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7724819 bytes
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.064666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 122.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.1) write-amplify(3.2) OK, records in: 7035, records dropped: 514 output_compression: NoCompression
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.064703) EVENT_LOG_v1 {"time_micros": 1764729406064685, "job": 72, "event": "compaction_finished", "compaction_time_micros": 63262, "compaction_time_cpu_micros": 37005, "output_level": 6, "num_output_files": 1, "total_output_size": 7724819, "num_input_records": 7035, "num_output_records": 6521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406068136, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729406072166, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:45.997550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:36:46.073287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]: {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    "0": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "devices": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "/dev/loop3"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            ],
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_name": "ceph_lv0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_size": "21470642176",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "name": "ceph_lv0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "tags": {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_name": "ceph",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.crush_device_class": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.encrypted": "0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_id": "0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.vdo": "0"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            },
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "vg_name": "ceph_vg0"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        }
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    ],
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    "1": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "devices": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "/dev/loop4"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            ],
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_name": "ceph_lv1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_size": "21470642176",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "name": "ceph_lv1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "tags": {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_name": "ceph",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.crush_device_class": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.encrypted": "0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_id": "1",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.vdo": "0"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            },
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "vg_name": "ceph_vg1"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        }
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    ],
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    "2": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "devices": [
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "/dev/loop5"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            ],
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_name": "ceph_lv2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_size": "21470642176",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "name": "ceph_lv2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "tags": {
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.cluster_name": "ceph",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.crush_device_class": "",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.encrypted": "0",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osd_id": "2",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:                "ceph.vdo": "0"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            },
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "type": "block",
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:            "vg_name": "ceph_vg2"
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:        }
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]:    ]
Dec  3 02:36:46 compute-0 nervous_ptolemy[483688]: }
Dec  3 02:36:46 compute-0 systemd[1]: libpod-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope: Deactivated successfully.
Dec  3 02:36:46 compute-0 podman[483673]: 2025-12-03 02:36:46.659680924 +0000 UTC m=+1.159943745 container died 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2bbd03bcd3392f0222bf82c153bedf13a5cc169d44eb9aac46f8d2eee504e5-merged.mount: Deactivated successfully.
Dec  3 02:36:46 compute-0 podman[483673]: 2025-12-03 02:36:46.762142495 +0000 UTC m=+1.262405286 container remove 0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:36:46 compute-0 systemd[1]: libpod-conmon-0d8bc2f6174914a2d9a1e3d90d2376d670e34b4e2f7b32c74e9178750676ec1a.scope: Deactivated successfully.
Dec  3 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.004 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:36:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:36:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:36:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/942960392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  3 02:36:47 compute-0 nova_compute[351485]: 2025-12-03 02:36:47.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  3 02:36:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:47 compute-0 podman[483843]: 2025-12-03 02:36:47.89343003 +0000 UTC m=+0.071612832 container create de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:47 compute-0 systemd[1]: Started libpod-conmon-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope.
Dec  3 02:36:47 compute-0 podman[483843]: 2025-12-03 02:36:47.872969333 +0000 UTC m=+0.051152135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.035782847 +0000 UTC m=+0.213965699 container init de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.053644491 +0000 UTC m=+0.231827293 container start de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.061231266 +0000 UTC m=+0.239414078 container attach de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 02:36:48 compute-0 happy_yalow[483859]: 167 167
Dec  3 02:36:48 compute-0 systemd[1]: libpod-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope: Deactivated successfully.
Dec  3 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.068090659 +0000 UTC m=+0.246273471 container died de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52a5c88f2676d11979354023c450777e5d3d9f2da7439dd53bccdf967564236-merged.mount: Deactivated successfully.
Dec  3 02:36:48 compute-0 podman[483843]: 2025-12-03 02:36:48.136937432 +0000 UTC m=+0.315120254 container remove de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:36:48 compute-0 systemd[1]: libpod-conmon-de27f91152604cc26e74ec581c6203ecdef1d8c9ed039f3db02f260908162515.scope: Deactivated successfully.
Dec  3 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.39201201 +0000 UTC m=+0.089014383 container create f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.358025411 +0000 UTC m=+0.055027784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:36:48 compute-0 systemd[1]: Started libpod-conmon-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope.
Dec  3 02:36:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.559481486 +0000 UTC m=+0.256483879 container init f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.586589621 +0000 UTC m=+0.283592034 container start f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  3 02:36:48 compute-0 podman[483882]: 2025-12-03 02:36:48.59329277 +0000 UTC m=+0.290295183 container attach f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  3 02:36:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:49 compute-0 nova_compute[351485]: 2025-12-03 02:36:49.414 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]: {
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_id": 2,
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "type": "bluestore"
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    },
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_id": 1,
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "type": "bluestore"
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    },
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_id": 0,
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:        "type": "bluestore"
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]:    }
Dec  3 02:36:49 compute-0 hopeful_satoshi[483898]: }
Dec  3 02:36:49 compute-0 systemd[1]: libpod-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Deactivated successfully.
Dec  3 02:36:49 compute-0 podman[483882]: 2025-12-03 02:36:49.78680999 +0000 UTC m=+1.483812363 container died f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:36:49 compute-0 systemd[1]: libpod-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Consumed 1.200s CPU time.
Dec  3 02:36:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a9c1c783a81069a04f2b3350348783b3f2236f8325e448b80fea9e4d209af04-merged.mount: Deactivated successfully.
Dec  3 02:36:49 compute-0 podman[483882]: 2025-12-03 02:36:49.893778969 +0000 UTC m=+1.590781332 container remove f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 02:36:49 compute-0 systemd[1]: libpod-conmon-f70a65956f1784f638ac70dfe972b9b27a00acc027ed6ff1901cc6884d4cb065.scope: Deactivated successfully.
Dec  3 02:36:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:36:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:49 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:36:49 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aab8e85d-a638-4482-990b-b01009e8c470 does not exist
Dec  3 02:36:49 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b93fe401-0a0e-4ae1-8a1b-8ff79f74ee1a does not exist
Dec  3 02:36:50 compute-0 podman[483944]: 2025-12-03 02:36:50.054313029 +0000 UTC m=+0.113598387 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:36:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:50 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:36:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:52 compute-0 nova_compute[351485]: 2025-12-03 02:36:52.009 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:53 compute-0 podman[484014]: 2025-12-03 02:36:53.855382435 +0000 UTC m=+0.100116686 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:36:53 compute-0 podman[484016]: 2025-12-03 02:36:53.881990756 +0000 UTC m=+0.107594087 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:36:53 compute-0 podman[484015]: 2025-12-03 02:36:53.887701447 +0000 UTC m=+0.123817565 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, name=ubi9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 02:36:53 compute-0 podman[484013]: 2025-12-03 02:36:53.889327623 +0000 UTC m=+0.134553768 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec  3 02:36:53 compute-0 podman[484012]: 2025-12-03 02:36:53.920906634 +0000 UTC m=+0.166545281 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Dec  3 02:36:54 compute-0 nova_compute[351485]: 2025-12-03 02:36:54.418 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:57 compute-0 nova_compute[351485]: 2025-12-03 02:36:57.014 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:36:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:36:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:36:59 compute-0 nova_compute[351485]: 2025-12-03 02:36:59.422 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.674 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:36:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:36:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:36:59 compute-0 podman[158098]: time="2025-12-03T02:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:36:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec  3 02:36:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:37:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:37:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:02 compute-0 nova_compute[351485]: 2025-12-03 02:37:02.017 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:04 compute-0 nova_compute[351485]: 2025-12-03 02:37:04.426 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:07 compute-0 nova_compute[351485]: 2025-12-03 02:37:07.020 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:09 compute-0 nova_compute[351485]: 2025-12-03 02:37:09.429 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:10 compute-0 podman[484116]: 2025-12-03 02:37:10.869615393 +0000 UTC m=+0.112672521 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 02:37:10 compute-0 podman[484118]: 2025-12-03 02:37:10.876079165 +0000 UTC m=+0.111796766 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:37:10 compute-0 podman[484117]: 2025-12-03 02:37:10.87945678 +0000 UTC m=+0.120151681 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  3 02:37:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:12 compute-0 nova_compute[351485]: 2025-12-03 02:37:12.023 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:14 compute-0 nova_compute[351485]: 2025-12-03 02:37:14.432 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:17 compute-0 nova_compute[351485]: 2025-12-03 02:37:17.026 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:19 compute-0 nova_compute[351485]: 2025-12-03 02:37:19.438 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.517 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.518 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.518 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.520 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.522 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.537 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.538 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:37:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:37:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:20 compute-0 podman[484174]: 2025-12-03 02:37:20.877758841 +0000 UTC m=+0.127936132 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:37:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:22 compute-0 nova_compute[351485]: 2025-12-03 02:37:22.030 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:24 compute-0 nova_compute[351485]: 2025-12-03 02:37:24.439 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:24 compute-0 podman[484197]: 2025-12-03 02:37:24.852733092 +0000 UTC m=+0.086529092 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:37:24 compute-0 podman[484198]: 2025-12-03 02:37:24.875355421 +0000 UTC m=+0.102289787 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:37:24 compute-0 podman[484196]: 2025-12-03 02:37:24.878591702 +0000 UTC m=+0.119380889 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc.)
Dec  3 02:37:24 compute-0 podman[484202]: 2025-12-03 02:37:24.8799228 +0000 UTC m=+0.102834992 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:37:24 compute-0 podman[484195]: 2025-12-03 02:37:24.90722159 +0000 UTC m=+0.153265215 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:37:25 compute-0 nova_compute[351485]: 2025-12-03 02:37:25.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:27 compute-0 nova_compute[351485]: 2025-12-03 02:37:27.033 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:37:28
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'backups', 'vms', 'default.rgw.control']
Dec  3 02:37:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:37:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:37:29 compute-0 nova_compute[351485]: 2025-12-03 02:37:29.442 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:29 compute-0 podman[158098]: time="2025-12-03T02:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:37:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec  3 02:37:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:37:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:37:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:32 compute-0 nova_compute[351485]: 2025-12-03 02:37:32.036 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:34 compute-0 nova_compute[351485]: 2025-12-03 02:37:34.446 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.038 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.616 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:37:37 compute-0 nova_compute[351485]: 2025-12-03 02:37:37.617 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:37:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:37:38 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670775599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.156 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.599 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.600 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3976MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.601 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.601 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:37:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.810 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.811 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:37:38 compute-0 nova_compute[351485]: 2025-12-03 02:37:38.909 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:37:39 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:37:39 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376766912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.431 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.445 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.475 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.479 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:37:39 compute-0 nova_compute[351485]: 2025-12-03 02:37:39.479 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:37:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:37:40 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1352 writes, 6191 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s#012Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    100.2      0.65              0.30        36    0.018       0      0       0.0       0.0#012  L6      1/0    7.37 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.1    133.8    110.2      2.43              1.18        35    0.069    194K    19K       0.0       0.0#012 Sum      1/0    7.37 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.1    105.7    108.1      3.08              1.48        71    0.043    194K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    116.3    119.1      0.41              0.19        10    0.041     33K   2561       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    133.8    110.2      2.43              1.18        35    0.069    194K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    100.6      0.64              0.30        35    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.063, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.32 GB read, 0.07 MB/s read, 3.1 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559a0b5b71f0#2 capacity: 304.00 MB usage: 40.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000546 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2797,38.99 MB,12.827%) FilterBlock(72,541.48 KB,0.173945%) IndexBlock(72,888.86 KB,0.285535%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.480 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.480 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.481 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.501 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:37:40 compute-0 nova_compute[351485]: 2025-12-03 02:37:40.502 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:41 compute-0 nova_compute[351485]: 2025-12-03 02:37:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:41 compute-0 nova_compute[351485]: 2025-12-03 02:37:41.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:37:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:41 compute-0 podman[484340]: 2025-12-03 02:37:41.870589892 +0000 UTC m=+0.121431617 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:37:41 compute-0 podman[484342]: 2025-12-03 02:37:41.876250572 +0000 UTC m=+0.114614185 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:37:41 compute-0 podman[484341]: 2025-12-03 02:37:41.892404338 +0000 UTC m=+0.137275825 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec  3 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.041 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.592 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.592 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.593 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:37:42 compute-0 nova_compute[351485]: 2025-12-03 02:37:42.618 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:37:43 compute-0 nova_compute[351485]: 2025-12-03 02:37:43.602 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:44 compute-0 nova_compute[351485]: 2025-12-03 02:37:44.452 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:44 compute-0 nova_compute[351485]: 2025-12-03 02:37:44.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:46 compute-0 nova_compute[351485]: 2025-12-03 02:37:46.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.044 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:37:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:37:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:37:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304231738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:37:47 compute-0 nova_compute[351485]: 2025-12-03 02:37:47.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:37:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:49 compute-0 nova_compute[351485]: 2025-12-03 02:37:49.455 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:51 compute-0 podman[484540]: 2025-12-03 02:37:51.488036971 +0000 UTC m=+0.127146009 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:37:51 compute-0 podman[484585]: 2025-12-03 02:37:51.700183288 +0000 UTC m=+0.127867989 container exec d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:37:51 compute-0 podman[484585]: 2025-12-03 02:37:51.823378295 +0000 UTC m=+0.251062906 container exec_died d4928ec355dde4f9832925371e530bcf9c3ae726293bfc429bb0df335de5c38b (image=quay.io/ceph/ceph:v18, name=ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mon-compute-0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:37:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:52 compute-0 nova_compute[351485]: 2025-12-03 02:37:52.047 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:37:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:52 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:37:52 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:53 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 84bb91c1-e103-44e1-8d9a-3c014b347111 does not exist
Dec  3 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e5b984aa-e0f1-4827-8dec-f563ca0cc057 does not exist
Dec  3 02:37:54 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 56960e68-9c7d-4699-8448-650ce03aeb03 does not exist
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:37:54 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:37:54 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:37:54 compute-0 nova_compute[351485]: 2025-12-03 02:37:54.457 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:37:55 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.400717616 +0000 UTC m=+0.076768267 container create e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.373486058 +0000 UTC m=+0.049536719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:37:55 compute-0 systemd[1]: Started libpod-conmon-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope.
Dec  3 02:37:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.560502355 +0000 UTC m=+0.236552986 container init e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.572125783 +0000 UTC m=+0.248176404 container start e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.576428255 +0000 UTC m=+0.252478896 container attach e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:37:55 compute-0 amazing_banach[485046]: 167 167
Dec  3 02:37:55 compute-0 systemd[1]: libpod-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope: Deactivated successfully.
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.582965719 +0000 UTC m=+0.259016340 container died e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  3 02:37:55 compute-0 podman[485019]: 2025-12-03 02:37:55.593809025 +0000 UTC m=+0.116744465 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  3 02:37:55 compute-0 podman[485021]: 2025-12-03 02:37:55.595279767 +0000 UTC m=+0.118065323 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  3 02:37:55 compute-0 podman[485020]: 2025-12-03 02:37:55.607140921 +0000 UTC m=+0.118395312 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3812db55ff61d92fc4136f608ac8a91e8842885d7e907742fa2b3f55bdfe373-merged.mount: Deactivated successfully.
Dec  3 02:37:55 compute-0 podman[485016]: 2025-12-03 02:37:55.630741618 +0000 UTC m=+0.153361069 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 02:37:55 compute-0 podman[485002]: 2025-12-03 02:37:55.636582912 +0000 UTC m=+0.312633533 container remove e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:37:55 compute-0 podman[485022]: 2025-12-03 02:37:55.637434236 +0000 UTC m=+0.141008680 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:37:55 compute-0 systemd[1]: libpod-conmon-e9e422c813eb668882266aefe4605fd8fff328dc02296364fdb36bd63d90235e.scope: Deactivated successfully.
Dec  3 02:37:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:55 compute-0 podman[485141]: 2025-12-03 02:37:55.868719323 +0000 UTC m=+0.082040256 container create 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 02:37:55 compute-0 podman[485141]: 2025-12-03 02:37:55.83601117 +0000 UTC m=+0.049332153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:37:55 compute-0 systemd[1]: Started libpod-conmon-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope.
Dec  3 02:37:55 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:37:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.023413919 +0000 UTC m=+0.236734832 container init 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.042895858 +0000 UTC m=+0.256216791 container start 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 02:37:56 compute-0 podman[485141]: 2025-12-03 02:37:56.049070573 +0000 UTC m=+0.262391476 container attach 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:37:57 compute-0 nova_compute[351485]: 2025-12-03 02:37:57.050 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:57 compute-0 jolly_solomon[485157]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:37:57 compute-0 jolly_solomon[485157]: --> relative data size: 1.0
Dec  3 02:37:57 compute-0 jolly_solomon[485157]: --> All data devices are unavailable
Dec  3 02:37:57 compute-0 systemd[1]: libpod-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Deactivated successfully.
Dec  3 02:37:57 compute-0 podman[485141]: 2025-12-03 02:37:57.405994974 +0000 UTC m=+1.619315897 container died 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:37:57 compute-0 systemd[1]: libpod-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Consumed 1.287s CPU time.
Dec  3 02:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad47152cd5e81c166997806f093450a7d6448cf0fe05278934250a7bc990125-merged.mount: Deactivated successfully.
Dec  3 02:37:57 compute-0 podman[485141]: 2025-12-03 02:37:57.537819244 +0000 UTC m=+1.751140147 container remove 63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:37:57 compute-0 systemd[1]: libpod-conmon-63d772f2091c565ca9f40cdbebd84a3bef46b6782a50d94526d671050f21ba7a.scope: Deactivated successfully.
Dec  3 02:37:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:37:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:37:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.748890141 +0000 UTC m=+0.084203257 container create 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.710353323 +0000 UTC m=+0.045666499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:37:58 compute-0 systemd[1]: Started libpod-conmon-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope.
Dec  3 02:37:58 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.901324622 +0000 UTC m=+0.236637768 container init 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.918749214 +0000 UTC m=+0.254062320 container start 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.925973068 +0000 UTC m=+0.261286244 container attach 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:37:58 compute-0 blissful_knuth[485352]: 167 167
Dec  3 02:37:58 compute-0 systemd[1]: libpod-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope: Deactivated successfully.
Dec  3 02:37:58 compute-0 podman[485336]: 2025-12-03 02:37:58.93029552 +0000 UTC m=+0.265608636 container died 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3862847ff6319251537fc2df48583f6d6ac4c9b912680f740ad06cf878f35675-merged.mount: Deactivated successfully.
Dec  3 02:37:59 compute-0 podman[485336]: 2025-12-03 02:37:59.006886881 +0000 UTC m=+0.342199987 container remove 83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:37:59 compute-0 systemd[1]: libpod-conmon-83baa69d07d928a8425762f8d36e6ea7a87db0b08047c198e441c9a8b9e7a272.scope: Deactivated successfully.
Dec  3 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.26800158 +0000 UTC m=+0.087763768 container create 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.229647828 +0000 UTC m=+0.049410056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:37:59 compute-0 systemd[1]: Started libpod-conmon-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope.
Dec  3 02:37:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:37:59 compute-0 nova_compute[351485]: 2025-12-03 02:37:59.460 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.46537284 +0000 UTC m=+0.285135008 container init 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.485983141 +0000 UTC m=+0.305745289 container start 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:37:59 compute-0 podman[485375]: 2025-12-03 02:37:59.495048367 +0000 UTC m=+0.314810515 container attach 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.675 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:37:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:37:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:37:59 compute-0 podman[158098]: time="2025-12-03T02:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44146 "" "Go-http-client/1.1"
Dec  3 02:37:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec  3 02:37:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:00 compute-0 relaxed_bell[485390]: {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    "0": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "devices": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "/dev/loop3"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            ],
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_name": "ceph_lv0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_size": "21470642176",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "name": "ceph_lv0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "tags": {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_name": "ceph",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.crush_device_class": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.encrypted": "0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_id": "0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.vdo": "0"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            },
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "vg_name": "ceph_vg0"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        }
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    ],
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    "1": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "devices": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "/dev/loop4"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            ],
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_name": "ceph_lv1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_size": "21470642176",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "name": "ceph_lv1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "tags": {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_name": "ceph",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.crush_device_class": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.encrypted": "0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_id": "1",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.vdo": "0"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            },
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "vg_name": "ceph_vg1"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        }
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    ],
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    "2": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "devices": [
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "/dev/loop5"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            ],
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_name": "ceph_lv2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_size": "21470642176",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "name": "ceph_lv2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "tags": {
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.cluster_name": "ceph",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.crush_device_class": "",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.encrypted": "0",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osd_id": "2",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:                "ceph.vdo": "0"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            },
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "type": "block",
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:            "vg_name": "ceph_vg2"
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:        }
Dec  3 02:38:00 compute-0 relaxed_bell[485390]:    ]
Dec  3 02:38:00 compute-0 relaxed_bell[485390]: }
Dec  3 02:38:00 compute-0 systemd[1]: libpod-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope: Deactivated successfully.
Dec  3 02:38:00 compute-0 podman[485400]: 2025-12-03 02:38:00.428136759 +0000 UTC m=+0.050007492 container died 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-db82ac831d0183c2956d2b8562b20617994cda2a215ed0533d12d16cb3674960-merged.mount: Deactivated successfully.
Dec  3 02:38:00 compute-0 podman[485400]: 2025-12-03 02:38:00.546494639 +0000 UTC m=+0.168365312 container remove 2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:38:00 compute-0 systemd[1]: libpod-conmon-2ed8234cb4caa9a6187f148aff36af283757c5b57d14e5acaaaca717987a48c8.scope: Deactivated successfully.
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:38:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.677162546 +0000 UTC m=+0.107505125 container create bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.634132881 +0000 UTC m=+0.064475510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:38:01 compute-0 systemd[1]: Started libpod-conmon-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope.
Dec  3 02:38:01 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.822086215 +0000 UTC m=+0.252428854 container init bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.834317271 +0000 UTC m=+0.264659860 container start bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.840739952 +0000 UTC m=+0.271082541 container attach bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  3 02:38:01 compute-0 nostalgic_booth[485570]: 167 167
Dec  3 02:38:01 compute-0 systemd[1]: libpod-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope: Deactivated successfully.
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.843343155 +0000 UTC m=+0.273685744 container died bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:38:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f1533dd154b974d6224bcf91e0849b0809aa8a1b7452aee4c9c92aff92723c3-merged.mount: Deactivated successfully.
Dec  3 02:38:01 compute-0 podman[485554]: 2025-12-03 02:38:01.921792519 +0000 UTC m=+0.352135078 container remove bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:38:01 compute-0 systemd[1]: libpod-conmon-bb0da5b4ff6a5764dba3449af7ae42de28e0dbbb8d8eaac2816dfe270ce3d79b.scope: Deactivated successfully.
Dec  3 02:38:02 compute-0 nova_compute[351485]: 2025-12-03 02:38:02.054 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.227482296 +0000 UTC m=+0.097863303 container create a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.18900013 +0000 UTC m=+0.059381197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:38:02 compute-0 systemd[1]: Started libpod-conmon-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope.
Dec  3 02:38:02 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:38:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.427016396 +0000 UTC m=+0.297397473 container init a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.451180068 +0000 UTC m=+0.321561085 container start a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  3 02:38:02 compute-0 podman[485592]: 2025-12-03 02:38:02.457822036 +0000 UTC m=+0.328203093 container attach a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]: {
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_id": 2,
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "type": "bluestore"
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    },
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_id": 1,
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "type": "bluestore"
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    },
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_id": 0,
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:        "type": "bluestore"
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]:    }
Dec  3 02:38:03 compute-0 vigilant_neumann[485608]: }
Dec  3 02:38:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:03 compute-0 systemd[1]: libpod-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Deactivated successfully.
Dec  3 02:38:03 compute-0 systemd[1]: libpod-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Consumed 1.334s CPU time.
Dec  3 02:38:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:03 compute-0 podman[485642]: 2025-12-03 02:38:03.869129113 +0000 UTC m=+0.061070865 container died a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  3 02:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f2cd1d89a2251c7a9e77cf414b9fd5fc2dee1bd2ac4bf481f3444b6885d21a-merged.mount: Deactivated successfully.
Dec  3 02:38:03 compute-0 podman[485642]: 2025-12-03 02:38:03.94487439 +0000 UTC m=+0.136816052 container remove a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:38:03 compute-0 systemd[1]: libpod-conmon-a03731a3014b993da5e06114e1258d414f85e56aff234d48cdd1fa1c0cdb596f.scope: Deactivated successfully.
Dec  3 02:38:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:38:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:38:04 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:38:04 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:38:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 99019756-4799-4978-9c19-389a75ca91a2 does not exist
Dec  3 02:38:04 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 9f9a1426-0e16-4a14-a05e-a86dd7092da5 does not exist
Dec  3 02:38:04 compute-0 nova_compute[351485]: 2025-12-03 02:38:04.465 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:38:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:38:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:07 compute-0 nova_compute[351485]: 2025-12-03 02:38:07.060 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:09 compute-0 nova_compute[351485]: 2025-12-03 02:38:09.467 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:12 compute-0 nova_compute[351485]: 2025-12-03 02:38:12.063 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:12 compute-0 podman[485709]: 2025-12-03 02:38:12.879015759 +0000 UTC m=+0.109094220 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:38:12 compute-0 podman[485708]: 2025-12-03 02:38:12.897158341 +0000 UTC m=+0.128650102 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  3 02:38:12 compute-0 podman[485707]: 2025-12-03 02:38:12.929858283 +0000 UTC m=+0.164963306 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:38:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:14 compute-0 nova_compute[351485]: 2025-12-03 02:38:14.470 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:17 compute-0 nova_compute[351485]: 2025-12-03 02:38:17.066 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:18 compute-0 nova_compute[351485]: 2025-12-03 02:38:18.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:19 compute-0 nova_compute[351485]: 2025-12-03 02:38:19.475 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:21 compute-0 podman[485764]: 2025-12-03 02:38:21.886218678 +0000 UTC m=+0.134073935 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:38:22 compute-0 nova_compute[351485]: 2025-12-03 02:38:22.070 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:24 compute-0 nova_compute[351485]: 2025-12-03 02:38:24.476 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:25 compute-0 podman[485786]: 2025-12-03 02:38:25.856424335 +0000 UTC m=+0.095324391 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 02:38:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:25 compute-0 podman[485783]: 2025-12-03 02:38:25.866956092 +0000 UTC m=+0.111288692 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:38:25 compute-0 podman[485784]: 2025-12-03 02:38:25.889337854 +0000 UTC m=+0.127527460 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:38:25 compute-0 podman[485785]: 2025-12-03 02:38:25.90018686 +0000 UTC m=+0.133906350 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, architecture=x86_64, release=1214.1726694543, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=)
Dec  3 02:38:25 compute-0 podman[485782]: 2025-12-03 02:38:25.920588045 +0000 UTC m=+0.168594238 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  3 02:38:26 compute-0 nova_compute[351485]: 2025-12-03 02:38:26.591 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:27 compute-0 nova_compute[351485]: 2025-12-03 02:38:27.073 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:38:28
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Dec  3 02:38:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:38:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:38:29 compute-0 nova_compute[351485]: 2025-12-03 02:38:29.478 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:29 compute-0 podman[158098]: time="2025-12-03T02:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:38:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
Dec  3 02:38:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:38:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:38:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:32 compute-0 nova_compute[351485]: 2025-12-03 02:38:32.076 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:34 compute-0 nova_compute[351485]: 2025-12-03 02:38:34.482 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:37 compute-0 nova_compute[351485]: 2025-12-03 02:38:37.080 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:38:38 compute-0 nova_compute[351485]: 2025-12-03 02:38:38.614 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:38:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.486 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:38:39 compute-0 nova_compute[351485]: 2025-12-03 02:38:39.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:38:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:38:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367587672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.135 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.712 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.713 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3970MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.714 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.714 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.957 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.958 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:38:40 compute-0 nova_compute[351485]: 2025-12-03 02:38:40.983 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:38:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:38:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448743769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.488 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.499 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.632 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.634 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:38:41 compute-0 nova_compute[351485]: 2025-12-03 02:38:41.635 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:38:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:42 compute-0 nova_compute[351485]: 2025-12-03 02:38:42.083 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:42 compute-0 nova_compute[351485]: 2025-12-03 02:38:42.637 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:43 compute-0 nova_compute[351485]: 2025-12-03 02:38:43.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:43 compute-0 podman[485929]: 2025-12-03 02:38:43.890489061 +0000 UTC m=+0.124137664 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:38:43 compute-0 podman[485928]: 2025-12-03 02:38:43.903011284 +0000 UTC m=+0.144377535 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:38:43 compute-0 podman[485927]: 2025-12-03 02:38:43.907296115 +0000 UTC m=+0.154969044 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.488 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:44 compute-0 nova_compute[351485]: 2025-12-03 02:38:44.600 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:38:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:38:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:38:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301271762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:38:47 compute-0 nova_compute[351485]: 2025-12-03 02:38:47.087 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:38:48 compute-0 nova_compute[351485]: 2025-12-03 02:38:48.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:38:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:49 compute-0 nova_compute[351485]: 2025-12-03 02:38:49.492 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:52 compute-0 nova_compute[351485]: 2025-12-03 02:38:52.090 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:52 compute-0 podman[485984]: 2025-12-03 02:38:52.902231288 +0000 UTC m=+0.137778059 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:38:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:54 compute-0 nova_compute[351485]: 2025-12-03 02:38:54.494 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:56 compute-0 podman[486008]: 2025-12-03 02:38:56.876158752 +0000 UTC m=+0.114325728 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:38:56 compute-0 podman[486007]: 2025-12-03 02:38:56.878675113 +0000 UTC m=+0.120390299 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, distribution-scope=public)
Dec  3 02:38:56 compute-0 podman[486009]: 2025-12-03 02:38:56.887264855 +0000 UTC m=+0.113120833 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release-0.7.12=, container_name=kepler)
Dec  3 02:38:56 compute-0 podman[486015]: 2025-12-03 02:38:56.89772763 +0000 UTC m=+0.122760025 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 02:38:56 compute-0 podman[486006]: 2025-12-03 02:38:56.906500178 +0000 UTC m=+0.156984081 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:38:57 compute-0 nova_compute[351485]: 2025-12-03 02:38:57.094 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:38:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:38:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:38:59 compute-0 nova_compute[351485]: 2025-12-03 02:38:59.497 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.676 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:38:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:38:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:38:59 compute-0 podman[158098]: time="2025-12-03T02:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:38:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec  3 02:38:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: ERROR   02:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:39:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:39:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:02 compute-0 nova_compute[351485]: 2025-12-03 02:39:02.097 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.772200) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543772241, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1324, "num_deletes": 255, "total_data_size": 2064886, "memory_usage": 2095600, "flush_reason": "Manual Compaction"}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543790097, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2023243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51019, "largest_seqno": 52342, "table_properties": {"data_size": 2016983, "index_size": 3527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12805, "raw_average_key_size": 19, "raw_value_size": 2004467, "raw_average_value_size": 3041, "num_data_blocks": 159, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729406, "oldest_key_time": 1764729406, "file_creation_time": 1764729543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 18003 microseconds, and 9442 cpu microseconds.
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.790199) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2023243 bytes OK
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.790224) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793179) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793201) EVENT_LOG_v1 {"time_micros": 1764729543793194, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.793225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2058963, prev total WAL file size 2058963, number of live WAL files 2.
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.795264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303038' seq:72057594037927935, type:22 .. '6C6F676D0032323539' seq:0, type:0; will stop at (end)
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1975KB)], [122(7543KB)]
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543795387, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9748062, "oldest_snapshot_seqno": -1}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6658 keys, 9637348 bytes, temperature: kUnknown
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543867354, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9637348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9594237, "index_size": 25335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174228, "raw_average_key_size": 26, "raw_value_size": 9475258, "raw_average_value_size": 1423, "num_data_blocks": 1010, "num_entries": 6658, "num_filter_entries": 6658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.867742) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9637348 bytes
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.870997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.3 rd, 133.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 7180, records dropped: 522 output_compression: NoCompression
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.871085) EVENT_LOG_v1 {"time_micros": 1764729543871053, "job": 74, "event": "compaction_finished", "compaction_time_micros": 72047, "compaction_time_cpu_micros": 44381, "output_level": 6, "num_output_files": 1, "total_output_size": 9637348, "num_input_records": 7180, "num_output_records": 6658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543872364, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729543875790, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.794601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:39:03.876125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:39:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:04 compute-0 nova_compute[351485]: 2025-12-03 02:39:04.501 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1068e014-027a-46b3-97a9-003b23d09828 does not exist
Dec  3 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d5cb69f4-3afb-4743-8b28-4e09d521121c does not exist
Dec  3 02:39:05 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 71c6b075-e242-4f68-9d2a-bb71924a90cc does not exist
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:39:05 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:05 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:39:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.833697949 +0000 UTC m=+0.099498269 container create 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.791037035 +0000 UTC m=+0.056837415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:06 compute-0 systemd[1]: Started libpod-conmon-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope.
Dec  3 02:39:06 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:06 compute-0 podman[486373]: 2025-12-03 02:39:06.994341682 +0000 UTC m=+0.260142082 container init 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  3 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.01338896 +0000 UTC m=+0.279189290 container start 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  3 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.022578399 +0000 UTC m=+0.288378789 container attach 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:39:07 compute-0 vigorous_darwin[486389]: 167 167
Dec  3 02:39:07 compute-0 systemd[1]: libpod-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope: Deactivated successfully.
Dec  3 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.028708122 +0000 UTC m=+0.294508452 container died 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  3 02:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab1ae4b645bf730e83cf8cb8e716eec75ab2f0cdcacce044cb234e309038c88-merged.mount: Deactivated successfully.
Dec  3 02:39:07 compute-0 nova_compute[351485]: 2025-12-03 02:39:07.100 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:07 compute-0 podman[486373]: 2025-12-03 02:39:07.112371093 +0000 UTC m=+0.378171413 container remove 2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:39:07 compute-0 systemd[1]: libpod-conmon-2bbf9695927ad244a5bfcbf115f8d5f60b4c1984aa057dd05dd5b00c70e8252b.scope: Deactivated successfully.
Dec  3 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.401749879 +0000 UTC m=+0.098375257 container create 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  3 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.374171201 +0000 UTC m=+0.070796569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:07 compute-0 systemd[1]: Started libpod-conmon-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope.
Dec  3 02:39:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.610339626 +0000 UTC m=+0.306965034 container init 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.642060771 +0000 UTC m=+0.338686139 container start 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:39:07 compute-0 podman[486413]: 2025-12-03 02:39:07.64876488 +0000 UTC m=+0.345390318 container attach 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:39:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:08 compute-0 brave_northcutt[486429]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:39:08 compute-0 brave_northcutt[486429]: --> relative data size: 1.0
Dec  3 02:39:08 compute-0 brave_northcutt[486429]: --> All data devices are unavailable
Dec  3 02:39:08 compute-0 systemd[1]: libpod-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Deactivated successfully.
Dec  3 02:39:08 compute-0 systemd[1]: libpod-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Consumed 1.257s CPU time.
Dec  3 02:39:08 compute-0 podman[486413]: 2025-12-03 02:39:08.959881849 +0000 UTC m=+1.656507247 container died 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  3 02:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed8cf433edf50338f86b4d96be8b89de5c7b5331b19ca8f2d7d24b2c82891cc0-merged.mount: Deactivated successfully.
Dec  3 02:39:09 compute-0 podman[486413]: 2025-12-03 02:39:09.0524148 +0000 UTC m=+1.749040148 container remove 3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:39:09 compute-0 systemd[1]: libpod-conmon-3f20d47e9a17171924034a75c26fa6286aa127594f3bd6a43e1c580f17d22428.scope: Deactivated successfully.
Dec  3 02:39:09 compute-0 nova_compute[351485]: 2025-12-03 02:39:09.504 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.236220567 +0000 UTC m=+0.083244300 container create b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.200000755 +0000 UTC m=+0.047024558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:10 compute-0 systemd[1]: Started libpod-conmon-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope.
Dec  3 02:39:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.365057543 +0000 UTC m=+0.212081356 container init b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.389836962 +0000 UTC m=+0.236860725 container start b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.397332794 +0000 UTC m=+0.244356577 container attach b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:39:10 compute-0 gallant_zhukovsky[486623]: 167 167
Dec  3 02:39:10 compute-0 systemd[1]: libpod-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope: Deactivated successfully.
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.405019861 +0000 UTC m=+0.252043624 container died b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9155b2732b910583c2b4f24c741b1eff1edd973582cc44d12eeff212bc9b90-merged.mount: Deactivated successfully.
Dec  3 02:39:10 compute-0 podman[486607]: 2025-12-03 02:39:10.49394138 +0000 UTC m=+0.340965133 container remove b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_zhukovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:39:10 compute-0 systemd[1]: libpod-conmon-b76f8f83ca977c6713442ffe78d94bd04586308d28b0b829093fb61574621b95.scope: Deactivated successfully.
Dec  3 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.748037681 +0000 UTC m=+0.072726004 container create 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:39:10 compute-0 systemd[1]: Started libpod-conmon-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope.
Dec  3 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.714730651 +0000 UTC m=+0.039419024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.901966444 +0000 UTC m=+0.226654817 container init 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.919431367 +0000 UTC m=+0.244119700 container start 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:39:10 compute-0 podman[486647]: 2025-12-03 02:39:10.924604853 +0000 UTC m=+0.249293146 container attach 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]: {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    "0": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "devices": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "/dev/loop3"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            ],
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_name": "ceph_lv0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_size": "21470642176",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "name": "ceph_lv0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "tags": {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_name": "ceph",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.crush_device_class": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.encrypted": "0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_id": "0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.vdo": "0"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            },
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "vg_name": "ceph_vg0"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        }
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    ],
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    "1": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "devices": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "/dev/loop4"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            ],
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_name": "ceph_lv1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_size": "21470642176",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "name": "ceph_lv1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "tags": {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_name": "ceph",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.crush_device_class": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.encrypted": "0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_id": "1",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.vdo": "0"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            },
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "vg_name": "ceph_vg1"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        }
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    ],
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    "2": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "devices": [
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "/dev/loop5"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            ],
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_name": "ceph_lv2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_size": "21470642176",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "name": "ceph_lv2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "tags": {
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.cluster_name": "ceph",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.crush_device_class": "",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.encrypted": "0",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osd_id": "2",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:                "ceph.vdo": "0"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            },
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "type": "block",
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:            "vg_name": "ceph_vg2"
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:        }
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]:    ]
Dec  3 02:39:11 compute-0 heuristic_einstein[486664]: }
Dec  3 02:39:11 compute-0 systemd[1]: libpod-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope: Deactivated successfully.
Dec  3 02:39:11 compute-0 podman[486647]: 2025-12-03 02:39:11.786414174 +0000 UTC m=+1.111102577 container died 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc3977b87c1bb7ec69921ca272668e66335123bef72c909acc208b021c849b6-merged.mount: Deactivated successfully.
Dec  3 02:39:11 compute-0 podman[486647]: 2025-12-03 02:39:11.871720011 +0000 UTC m=+1.196408304 container remove 0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:39:11 compute-0 systemd[1]: libpod-conmon-0f834ba94049d8408491cc75feb8844e958efe15df6b46dc30b1091a61534672.scope: Deactivated successfully.
Dec  3 02:39:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:12 compute-0 nova_compute[351485]: 2025-12-03 02:39:12.110 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.12084772 +0000 UTC m=+0.084612429 container create 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.087368425 +0000 UTC m=+0.051133184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:13 compute-0 systemd[1]: Started libpod-conmon-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope.
Dec  3 02:39:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.278957282 +0000 UTC m=+0.242722031 container init 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.295104628 +0000 UTC m=+0.258869337 container start 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.302594719 +0000 UTC m=+0.266359468 container attach 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:39:13 compute-0 goofy_turing[486841]: 167 167
Dec  3 02:39:13 compute-0 systemd[1]: libpod-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope: Deactivated successfully.
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.309178105 +0000 UTC m=+0.272942804 container died 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  3 02:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-04b188f955d95adbb15f645a77c0365d0105590f531936f94384e84abf746ef8-merged.mount: Deactivated successfully.
Dec  3 02:39:13 compute-0 podman[486825]: 2025-12-03 02:39:13.388648197 +0000 UTC m=+0.352412876 container remove 651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:39:13 compute-0 systemd[1]: libpod-conmon-651afdc5074b3438dfee26e3ae90c1f59830466909598bc873d43c2d8361ec68.scope: Deactivated successfully.
Dec  3 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.67400462 +0000 UTC m=+0.106734703 container create 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.636976765 +0000 UTC m=+0.069706908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:39:13 compute-0 systemd[1]: Started libpod-conmon-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope.
Dec  3 02:39:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:13 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.849793801 +0000 UTC m=+0.282523944 container init 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.881732102 +0000 UTC m=+0.314462195 container start 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:39:13 compute-0 podman[486864]: 2025-12-03 02:39:13.888503713 +0000 UTC m=+0.321233806 container attach 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:39:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:14 compute-0 nova_compute[351485]: 2025-12-03 02:39:14.508 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:14 compute-0 podman[486893]: 2025-12-03 02:39:14.853192677 +0000 UTC m=+0.103748219 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:39:14 compute-0 podman[486898]: 2025-12-03 02:39:14.882496454 +0000 UTC m=+0.125051550 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:39:14 compute-0 podman[486897]: 2025-12-03 02:39:14.889111051 +0000 UTC m=+0.139268252 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec  3 02:39:14 compute-0 musing_mestorf[486880]: {
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_id": 2,
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "type": "bluestore"
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    },
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_id": 1,
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "type": "bluestore"
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    },
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_id": 0,
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:        "type": "bluestore"
Dec  3 02:39:14 compute-0 musing_mestorf[486880]:    }
Dec  3 02:39:14 compute-0 musing_mestorf[486880]: }
Dec  3 02:39:15 compute-0 systemd[1]: libpod-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Deactivated successfully.
Dec  3 02:39:15 compute-0 systemd[1]: libpod-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Consumed 1.122s CPU time.
Dec  3 02:39:15 compute-0 podman[486864]: 2025-12-03 02:39:15.024323266 +0000 UTC m=+1.457053349 container died 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-62fd25ab72b803e717c606ba82645e5f628e2062cfdf62a46b3d4f0289d5c847-merged.mount: Deactivated successfully.
Dec  3 02:39:15 compute-0 podman[486864]: 2025-12-03 02:39:15.118849114 +0000 UTC m=+1.551579177 container remove 1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:39:15 compute-0 systemd[1]: libpod-conmon-1da437410b453ac541f34a9d108bdf469be894628f80c977861a3f2229104b90.scope: Deactivated successfully.
Dec  3 02:39:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:39:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:39:15 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:15 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 846c9050-a407-4804-aadb-9840dc14425f does not exist
Dec  3 02:39:15 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 69b301ab-2fee-463d-970f-bd5ebc663ab9 does not exist
Dec  3 02:39:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:16 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:39:17 compute-0 nova_compute[351485]: 2025-12-03 02:39:17.113 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:19 compute-0 nova_compute[351485]: 2025-12-03 02:39:19.511 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.518 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.519 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.533 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:39:19.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:39:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.116 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.838 351492 DEBUG oslo_concurrency.processutils [None req-445974f2-4675-4ece-9116-f5f717039c73 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:39:22 compute-0 nova_compute[351485]: 2025-12-03 02:39:22.892 351492 DEBUG oslo_concurrency.processutils [None req-445974f2-4675-4ece-9116-f5f717039c73 03ba25e4009b43f7b0054fee32bf9136 9746b242761a48048d185ce26d622b33 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:39:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:23 compute-0 podman[487033]: 2025-12-03 02:39:23.900463479 +0000 UTC m=+0.141997658 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  3 02:39:24 compute-0 nova_compute[351485]: 2025-12-03 02:39:24.514 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:27 compute-0 nova_compute[351485]: 2025-12-03 02:39:27.119 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:27 compute-0 podman[487056]: 2025-12-03 02:39:27.892944695 +0000 UTC m=+0.124705400 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  3 02:39:27 compute-0 podman[487057]: 2025-12-03 02:39:27.892281476 +0000 UTC m=+0.121064027 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, config_id=edpm, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:39:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:27 compute-0 podman[487068]: 2025-12-03 02:39:27.904131071 +0000 UTC m=+0.125904094 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  3 02:39:27 compute-0 podman[487055]: 2025-12-03 02:39:27.921142671 +0000 UTC m=+0.168335222 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:39:27 compute-0 podman[487054]: 2025-12-03 02:39:27.957713603 +0000 UTC m=+0.206903280 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:28 compute-0 nova_compute[351485]: 2025-12-03 02:39:28.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:39:28
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'backups', '.mgr', 'images', 'default.rgw.log']
Dec  3 02:39:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:39:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:39:29 compute-0 nova_compute[351485]: 2025-12-03 02:39:29.517 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:29 compute-0 podman[158098]: time="2025-12-03T02:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:39:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8213 "" "Go-http-client/1.1"
Dec  3 02:39:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:30 compute-0 nova_compute[351485]: 2025-12-03 02:39:30.986 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:30.987 288528 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1a:a6:85', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:2a:11:ae:7b:8c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 02:39:30 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:30.989 288528 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: ERROR   02:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:39:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:39:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:32 compute-0 nova_compute[351485]: 2025-12-03 02:39:32.122 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:34 compute-0 nova_compute[351485]: 2025-12-03 02:39:34.520 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:36 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:36.992 288528 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=eda9fd7d-f2b1-4121-b9ac-fc31f8426272, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 02:39:37 compute-0 nova_compute[351485]: 2025-12-03 02:39:37.124 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.525 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.632 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.633 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.682 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.683 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.684 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.685 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:39:39 compute-0 nova_compute[351485]: 2025-12-03 02:39:39.686 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:39:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:39:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600621679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.214 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.846 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.848 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3942MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.849 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.935 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:39:40 compute-0 nova_compute[351485]: 2025-12-03 02:39:40.936 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.015 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:39:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:39:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875702493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.492 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.499 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.517 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.518 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:39:41 compute-0 nova_compute[351485]: 2025-12-03 02:39:41.518 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:39:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:42 compute-0 nova_compute[351485]: 2025-12-03 02:39:42.128 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:39:42 compute-0 ceph-osd[206633]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 3012 syncs, 3.57 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 423 writes, 1287 keys, 423 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s#012Interval WAL: 423 writes, 202 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:39:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.463 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.527 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:44 compute-0 nova_compute[351485]: 2025-12-03 02:39:44.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:45 compute-0 nova_compute[351485]: 2025-12-03 02:39:45.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:45 compute-0 podman[487200]: 2025-12-03 02:39:45.856140258 +0000 UTC m=+0.103191663 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:39:45 compute-0 podman[487198]: 2025-12-03 02:39:45.860951833 +0000 UTC m=+0.110586831 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:39:45 compute-0 podman[487199]: 2025-12-03 02:39:45.882364338 +0000 UTC m=+0.127561071 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  3 02:39:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:46 compute-0 nova_compute[351485]: 2025-12-03 02:39:46.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:39:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:39:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:39:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1485462913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:39:47 compute-0 nova_compute[351485]: 2025-12-03 02:39:47.133 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:39:48 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3504 syncs, 3.50 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 495 writes, 1261 keys, 495 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s#012Interval WAL: 495 writes, 232 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:39:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.529 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:49 compute-0 nova_compute[351485]: 2025-12-03 02:39:49.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:39:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:50 compute-0 nova_compute[351485]: 2025-12-03 02:39:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:39:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:52 compute-0 nova_compute[351485]: 2025-12-03 02:39:52.137 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:54 compute-0 nova_compute[351485]: 2025-12-03 02:39:54.532 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:54 compute-0 podman[487258]: 2025-12-03 02:39:54.886972082 +0000 UTC m=+0.136787111 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  3 02:39:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:39:55 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 9641 writes, 36K keys, 9641 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9641 writes, 2604 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 416 writes, 951 keys, 416 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 416 writes, 194 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:39:57 compute-0 ceph-mgr[193109]: [devicehealth INFO root] Check health
Dec  3 02:39:57 compute-0 nova_compute[351485]: 2025-12-03 02:39:57.140 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:39:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:39:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:39:58 compute-0 podman[487278]: 2025-12-03 02:39:58.879778037 +0000 UTC m=+0.115302605 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Dec  3 02:39:58 compute-0 podman[487287]: 2025-12-03 02:39:58.882144394 +0000 UTC m=+0.094398415 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:39:58 compute-0 podman[487279]: 2025-12-03 02:39:58.900711588 +0000 UTC m=+0.131890683 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:39:58 compute-0 podman[487285]: 2025-12-03 02:39:58.909720812 +0000 UTC m=+0.123018373 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  3 02:39:58 compute-0 podman[487277]: 2025-12-03 02:39:58.92948728 +0000 UTC m=+0.183341665 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 02:39:59 compute-0 nova_compute[351485]: 2025-12-03 02:39:59.534 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.677 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:39:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:39:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:39:59 compute-0 podman[158098]: time="2025-12-03T02:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:39:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
Dec  3 02:39:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:40:01 compute-0 openstack_network_exporter[368278]: ERROR   02:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:40:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:02 compute-0 nova_compute[351485]: 2025-12-03 02:40:02.144 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:04 compute-0 nova_compute[351485]: 2025-12-03 02:40:04.538 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:07 compute-0 nova_compute[351485]: 2025-12-03 02:40:07.147 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:09 compute-0 nova_compute[351485]: 2025-12-03 02:40:09.542 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:12 compute-0 nova_compute[351485]: 2025-12-03 02:40:12.150 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:14 compute-0 nova_compute[351485]: 2025-12-03 02:40:14.546 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:16 compute-0 podman[487458]: 2025-12-03 02:40:16.060059665 +0000 UTC m=+0.099019555 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  3 02:40:16 compute-0 podman[487456]: 2025-12-03 02:40:16.080235065 +0000 UTC m=+0.128967831 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:40:16 compute-0 podman[487457]: 2025-12-03 02:40:16.086908853 +0000 UTC m=+0.121644554 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev f3df4bec-171c-4134-b92d-b8c299e84bbb does not exist
Dec  3 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 119854fe-9f00-431a-9d53-54d219afad90 does not exist
Dec  3 02:40:16 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev eb904554-eec3-43b1-ba60-776c6e2bbcb2 does not exist
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:40:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:40:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:17 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:40:17 compute-0 nova_compute[351485]: 2025-12-03 02:40:17.153 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:17 compute-0 podman[487708]: 2025-12-03 02:40:17.987133156 +0000 UTC m=+0.093103348 container create 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:17.94970553 +0000 UTC m=+0.055675772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:18 compute-0 systemd[1]: Started libpod-conmon-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope.
Dec  3 02:40:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.139211368 +0000 UTC m=+0.245181610 container init 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.157877975 +0000 UTC m=+0.263848157 container start 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.165051287 +0000 UTC m=+0.271021529 container attach 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:18 compute-0 vigorous_napier[487724]: 167 167
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.17365044 +0000 UTC m=+0.279620662 container died 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:40:18 compute-0 systemd[1]: libpod-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope: Deactivated successfully.
Dec  3 02:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2933bceaf615f0f0843f11cf272a144fa4abf18411b25c2a6a2b311fb815dfbd-merged.mount: Deactivated successfully.
Dec  3 02:40:18 compute-0 podman[487708]: 2025-12-03 02:40:18.270330438 +0000 UTC m=+0.376300600 container remove 90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 02:40:18 compute-0 systemd[1]: libpod-conmon-90f1d2b8a7306873d762e573e672f3bb23ac49f70c835a456eced183d07b226b.scope: Deactivated successfully.
Dec  3 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.555132365 +0000 UTC m=+0.083834756 container create bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.526281211 +0000 UTC m=+0.054983662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:18 compute-0 systemd[1]: Started libpod-conmon-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope.
Dec  3 02:40:18 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.708970947 +0000 UTC m=+0.237673388 container init bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.744645833 +0000 UTC m=+0.273348224 container start bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:40:18 compute-0 podman[487747]: 2025-12-03 02:40:18.750705044 +0000 UTC m=+0.279407445 container attach bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:19 compute-0 nova_compute[351485]: 2025-12-03 02:40:19.548 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:20 compute-0 nervous_lichterman[487764]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:40:20 compute-0 nervous_lichterman[487764]: --> relative data size: 1.0
Dec  3 02:40:20 compute-0 nervous_lichterman[487764]: --> All data devices are unavailable
Dec  3 02:40:20 compute-0 systemd[1]: libpod-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Deactivated successfully.
Dec  3 02:40:20 compute-0 podman[487747]: 2025-12-03 02:40:20.089477275 +0000 UTC m=+1.618179666 container died bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:40:20 compute-0 systemd[1]: libpod-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Consumed 1.282s CPU time.
Dec  3 02:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b77bc7aa3b2c7f72642bba78f02b18cfda46a6cce9c84feb09edd66c566f095-merged.mount: Deactivated successfully.
Dec  3 02:40:20 compute-0 podman[487747]: 2025-12-03 02:40:20.207485124 +0000 UTC m=+1.736187515 container remove bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lichterman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:40:20 compute-0 systemd[1]: libpod-conmon-bde6ba6d865bb4cdb7b254d75ae4ca2718ea4ca376066e33feeb6f2230a0f7c3.scope: Deactivated successfully.
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.437257738 +0000 UTC m=+0.103844612 container create 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.391770124 +0000 UTC m=+0.058357078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:21 compute-0 systemd[1]: Started libpod-conmon-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope.
Dec  3 02:40:21 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.600381881 +0000 UTC m=+0.266968795 container init 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.61874959 +0000 UTC m=+0.285336484 container start 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.625355356 +0000 UTC m=+0.291942320 container attach 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:40:21 compute-0 competent_lovelace[487960]: 167 167
Dec  3 02:40:21 compute-0 systemd[1]: libpod-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope: Deactivated successfully.
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.632488827 +0000 UTC m=+0.299075771 container died 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  3 02:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f386d7cf362774680dd90444c9e7090d1fcf18791e5a553459bdad0964f894-merged.mount: Deactivated successfully.
Dec  3 02:40:21 compute-0 podman[487944]: 2025-12-03 02:40:21.724636388 +0000 UTC m=+0.391223292 container remove 1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:40:21 compute-0 systemd[1]: libpod-conmon-1eace89d3b6c530f9a63e19f38675901167489d14f45d3f783e77bc66513af19.scope: Deactivated successfully.
Dec  3 02:40:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.005863024 +0000 UTC m=+0.071078137 container create b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:22 compute-0 systemd[1]: Started libpod-conmon-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope.
Dec  3 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:21.987781984 +0000 UTC m=+0.052997117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:22 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.13437243 +0000 UTC m=+0.199587553 container init b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.146295117 +0000 UTC m=+0.211510230 container start b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:40:22 compute-0 podman[487985]: 2025-12-03 02:40:22.150475815 +0000 UTC m=+0.215690928 container attach b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:22 compute-0 nova_compute[351485]: 2025-12-03 02:40:22.176 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:23 compute-0 dreamy_noether[488002]: {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    "0": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "devices": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "/dev/loop3"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            ],
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_name": "ceph_lv0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_size": "21470642176",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "name": "ceph_lv0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "tags": {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_name": "ceph",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.crush_device_class": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.encrypted": "0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_id": "0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.vdo": "0"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            },
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "vg_name": "ceph_vg0"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        }
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    ],
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    "1": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "devices": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "/dev/loop4"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            ],
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_name": "ceph_lv1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_size": "21470642176",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "name": "ceph_lv1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "tags": {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_name": "ceph",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.crush_device_class": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.encrypted": "0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_id": "1",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.vdo": "0"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            },
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "vg_name": "ceph_vg1"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        }
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    ],
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    "2": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "devices": [
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "/dev/loop5"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            ],
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_name": "ceph_lv2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_size": "21470642176",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "name": "ceph_lv2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "tags": {
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.cluster_name": "ceph",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.crush_device_class": "",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.encrypted": "0",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osd_id": "2",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:                "ceph.vdo": "0"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            },
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "type": "block",
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:            "vg_name": "ceph_vg2"
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:        }
Dec  3 02:40:23 compute-0 dreamy_noether[488002]:    ]
Dec  3 02:40:23 compute-0 dreamy_noether[488002]: }
Dec  3 02:40:23 compute-0 systemd[1]: libpod-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope: Deactivated successfully.
Dec  3 02:40:23 compute-0 podman[487985]: 2025-12-03 02:40:23.094887516 +0000 UTC m=+1.160102659 container died b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cda2d6f4278a5d798dff3ee990fa9334da896bad379a001774e67c9010a057c-merged.mount: Deactivated successfully.
Dec  3 02:40:23 compute-0 podman[487985]: 2025-12-03 02:40:23.208231875 +0000 UTC m=+1.273446998 container remove b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 02:40:23 compute-0 systemd[1]: libpod-conmon-b5a571adf656d919922d05a7baeea26171dcb6f9673542b51970c59f6ffc22da.scope: Deactivated successfully.
Dec  3 02:40:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.403016011 +0000 UTC m=+0.084424784 container create a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.367170269 +0000 UTC m=+0.048579102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:24 compute-0 systemd[1]: Started libpod-conmon-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope.
Dec  3 02:40:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:24 compute-0 nova_compute[351485]: 2025-12-03 02:40:24.551 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.558987262 +0000 UTC m=+0.240396085 container init a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.574431848 +0000 UTC m=+0.255840591 container start a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.579700877 +0000 UTC m=+0.261109710 container attach a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:40:24 compute-0 nostalgic_brahmagupta[488177]: 167 167
Dec  3 02:40:24 compute-0 systemd[1]: libpod-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope: Deactivated successfully.
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.587152947 +0000 UTC m=+0.268561730 container died a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39d0f166615323d7de4d1ee1dc2a335ca0ce04153eadd9c62320d2ba33208ad-merged.mount: Deactivated successfully.
Dec  3 02:40:24 compute-0 podman[488160]: 2025-12-03 02:40:24.652593444 +0000 UTC m=+0.334002217 container remove a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:40:24 compute-0 systemd[1]: libpod-conmon-a9e478454e36aecaa97114cff47e189b115d16c84dccecbdbc19baed193788fe.scope: Deactivated successfully.
Dec  3 02:40:24 compute-0 podman[488199]: 2025-12-03 02:40:24.886130954 +0000 UTC m=+0.061310471 container create c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  3 02:40:24 compute-0 systemd[1]: Started libpod-conmon-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope.
Dec  3 02:40:24 compute-0 podman[488199]: 2025-12-03 02:40:24.870494583 +0000 UTC m=+0.045674120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:40:24 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.031892237 +0000 UTC m=+0.207071784 container init c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.062206633 +0000 UTC m=+0.237386190 container start c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:25 compute-0 podman[488199]: 2025-12-03 02:40:25.073055999 +0000 UTC m=+0.248235566 container attach c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  3 02:40:25 compute-0 podman[488214]: 2025-12-03 02:40:25.129399709 +0000 UTC m=+0.157420153 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:40:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:26 compute-0 funny_meitner[488215]: {
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_id": 2,
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "type": "bluestore"
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    },
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_id": 1,
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "type": "bluestore"
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    },
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_id": 0,
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:40:26 compute-0 funny_meitner[488215]:        "type": "bluestore"
Dec  3 02:40:26 compute-0 funny_meitner[488215]:    }
Dec  3 02:40:26 compute-0 funny_meitner[488215]: }
Dec  3 02:40:26 compute-0 systemd[1]: libpod-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Deactivated successfully.
Dec  3 02:40:26 compute-0 systemd[1]: libpod-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Consumed 1.217s CPU time.
Dec  3 02:40:26 compute-0 podman[488266]: 2025-12-03 02:40:26.372302014 +0000 UTC m=+0.069001128 container died c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bdcd087b6c18b9e47a07b0418124df1520d81086b846d83a36818f6bc99e6ce-merged.mount: Deactivated successfully.
Dec  3 02:40:26 compute-0 podman[488266]: 2025-12-03 02:40:26.489331506 +0000 UTC m=+0.186030580 container remove c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:40:26 compute-0 systemd[1]: libpod-conmon-c480d6506ac213d04f6337677de0e94d64718724129523647a7f4330ac8a2bb2.scope: Deactivated successfully.
Dec  3 02:40:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:40:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:26 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:40:26 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:26 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 24fa1481-e392-44a0-83ff-c6bf2975afb4 does not exist
Dec  3 02:40:26 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 3557086b-ac86-468f-ad71-5c2885d78cf4 does not exist
Dec  3 02:40:27 compute-0 nova_compute[351485]: 2025-12-03 02:40:27.179 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:27 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:40:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:40:28
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Dec  3 02:40:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:40:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:40:29 compute-0 nova_compute[351485]: 2025-12-03 02:40:29.554 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:29 compute-0 podman[158098]: time="2025-12-03T02:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:40:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec  3 02:40:29 compute-0 podman[488331]: 2025-12-03 02:40:29.883341347 +0000 UTC m=+0.104286334 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:40:29 compute-0 podman[488330]: 2025-12-03 02:40:29.889339716 +0000 UTC m=+0.131423390 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Dec  3 02:40:29 compute-0 podman[488337]: 2025-12-03 02:40:29.898752221 +0000 UTC m=+0.115867360 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 02:40:29 compute-0 podman[488332]: 2025-12-03 02:40:29.92919145 +0000 UTC m=+0.159326007 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, version=9.4)
Dec  3 02:40:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:29 compute-0 podman[488329]: 2025-12-03 02:40:29.949223085 +0000 UTC m=+0.184907719 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:40:30 compute-0 nova_compute[351485]: 2025-12-03 02:40:30.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: ERROR   02:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:40:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:40:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:32 compute-0 nova_compute[351485]: 2025-12-03 02:40:32.183 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.035016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634035053, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 975, "num_deletes": 251, "total_data_size": 1403388, "memory_usage": 1431856, "flush_reason": "Manual Compaction"}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634046478, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1379190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52343, "largest_seqno": 53317, "table_properties": {"data_size": 1374344, "index_size": 2434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10414, "raw_average_key_size": 19, "raw_value_size": 1364671, "raw_average_value_size": 2574, "num_data_blocks": 109, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729544, "oldest_key_time": 1764729544, "file_creation_time": 1764729634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 11585 microseconds, and 5662 cpu microseconds.
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.046599) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1379190 bytes OK
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.046618) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048640) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048655) EVENT_LOG_v1 {"time_micros": 1764729634048650, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.048671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1398736, prev total WAL file size 1398736, number of live WAL files 2.
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.050697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1346KB)], [125(9411KB)]
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634050776, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11016538, "oldest_snapshot_seqno": -1}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6674 keys, 9270065 bytes, temperature: kUnknown
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634124208, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9270065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9227240, "index_size": 25048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175215, "raw_average_key_size": 26, "raw_value_size": 9108277, "raw_average_value_size": 1364, "num_data_blocks": 991, "num_entries": 6674, "num_filter_entries": 6674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.124471) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9270065 bytes
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.126408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.8 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 7188, records dropped: 514 output_compression: NoCompression
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.126428) EVENT_LOG_v1 {"time_micros": 1764729634126419, "job": 76, "event": "compaction_finished", "compaction_time_micros": 73528, "compaction_time_cpu_micros": 47948, "output_level": 6, "num_output_files": 1, "total_output_size": 9270065, "num_input_records": 7188, "num_output_records": 6674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634126949, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729634129843, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.049471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:40:34.130576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:40:34 compute-0 nova_compute[351485]: 2025-12-03 02:40:34.557 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:37 compute-0 nova_compute[351485]: 2025-12-03 02:40:37.186 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:40:39 compute-0 nova_compute[351485]: 2025-12-03 02:40:39.559 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.632 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.633 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.634 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:40:40 compute-0 nova_compute[351485]: 2025-12-03 02:40:40.634 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:40:41 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:40:41 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749493240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.190 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.771 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.774 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.775 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:40:41 compute-0 nova_compute[351485]: 2025-12-03 02:40:41.776 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:40:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.189 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.713 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:40:42 compute-0 nova_compute[351485]: 2025-12-03 02:40:42.714 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.128 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing inventories for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.577 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating ProviderTree inventory for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.578 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Updating inventory in ProviderTree for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.596 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing aggregate associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.625 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Refreshing trait associations for resource provider 107397d2-51bc-4a03-bce4-7cd69319cf05, traits: HW_CPU_X86_SSE42,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 02:40:43 compute-0 nova_compute[351485]: 2025-12-03 02:40:43.646 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:40:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:40:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591209415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.129 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.144 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.169 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.172 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.173 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:40:44 compute-0 nova_compute[351485]: 2025-12-03 02:40:44.562 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.174 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.174 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.175 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.197 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.197 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:45 compute-0 nova_compute[351485]: 2025-12-03 02:40:45.198 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:46 compute-0 podman[488476]: 2025-12-03 02:40:46.864267103 +0000 UTC m=+0.102558495 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:40:46 compute-0 podman[488475]: 2025-12-03 02:40:46.898621812 +0000 UTC m=+0.143149100 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:40:46 compute-0 podman[488474]: 2025-12-03 02:40:46.901284107 +0000 UTC m=+0.152824493 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  3 02:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:40:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:40:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:40:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655014276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.193 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:47 compute-0 nova_compute[351485]: 2025-12-03 02:40:47.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:48 compute-0 nova_compute[351485]: 2025-12-03 02:40:48.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:49 compute-0 nova_compute[351485]: 2025-12-03 02:40:49.567 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:50 compute-0 nova_compute[351485]: 2025-12-03 02:40:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:50 compute-0 nova_compute[351485]: 2025-12-03 02:40:50.578 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:40:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:52 compute-0 nova_compute[351485]: 2025-12-03 02:40:52.196 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:52 compute-0 nova_compute[351485]: 2025-12-03 02:40:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:40:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:53 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:54 compute-0 nova_compute[351485]: 2025-12-03 02:40:54.568 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:55 compute-0 podman[488539]: 2025-12-03 02:40:55.88908036 +0000 UTC m=+0.147680268 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 02:40:55 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:57 compute-0 nova_compute[351485]: 2025-12-03 02:40:57.199 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:57 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:40:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:40:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:40:59 compute-0 nova_compute[351485]: 2025-12-03 02:40:59.572 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.678 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:40:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:40:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:40:59 compute-0 podman[158098]: time="2025-12-03T02:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:40:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8218 "" "Go-http-client/1.1"
Dec  3 02:40:59 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:00 compute-0 podman[488560]: 2025-12-03 02:41:00.867799828 +0000 UTC m=+0.093972852 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:41:00 compute-0 podman[488568]: 2025-12-03 02:41:00.911773819 +0000 UTC m=+0.124166595 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  3 02:41:00 compute-0 podman[488561]: 2025-12-03 02:41:00.913919899 +0000 UTC m=+0.135063611 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  3 02:41:00 compute-0 podman[488559]: 2025-12-03 02:41:00.936287311 +0000 UTC m=+0.170370779 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  3 02:41:00 compute-0 podman[488558]: 2025-12-03 02:41:00.954919717 +0000 UTC m=+0.199910252 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: ERROR   02:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:41:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:41:01 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:02 compute-0 nova_compute[351485]: 2025-12-03 02:41:02.202 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:03 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:04 compute-0 nova_compute[351485]: 2025-12-03 02:41:04.575 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:05 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:07 compute-0 nova_compute[351485]: 2025-12-03 02:41:07.205 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:07 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:09 compute-0 nova_compute[351485]: 2025-12-03 02:41:09.578 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:09 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:11 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:12 compute-0 nova_compute[351485]: 2025-12-03 02:41:12.208 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:13 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:14 compute-0 nova_compute[351485]: 2025-12-03 02:41:14.580 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:15 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:17 compute-0 nova_compute[351485]: 2025-12-03 02:41:17.211 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:17 compute-0 podman[488664]: 2025-12-03 02:41:17.884258939 +0000 UTC m=+0.121182941 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:41:17 compute-0 podman[488662]: 2025-12-03 02:41:17.887805339 +0000 UTC m=+0.140326171 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  3 02:41:17 compute-0 podman[488663]: 2025-12-03 02:41:17.899285163 +0000 UTC m=+0.147219226 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:41:17 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.519 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.521 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.523 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.524 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.529 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.530 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.536 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.539 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.540 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.541 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.542 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.543 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:41:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:41:19 compute-0 nova_compute[351485]: 2025-12-03 02:41:19.584 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:19 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:21 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec  3 02:41:22 compute-0 nova_compute[351485]: 2025-12-03 02:41:22.214 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:23 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec  3 02:41:24 compute-0 nova_compute[351485]: 2025-12-03 02:41:24.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:25 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Dec  3 02:41:26 compute-0 podman[488723]: 2025-12-03 02:41:26.880781268 +0000 UTC m=+0.135966788 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:41:27 compute-0 nova_compute[351485]: 2025-12-03 02:41:27.217 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:27 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:41:28
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 0f89b476-c266-4863-954e-27166a514892 does not exist
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 20f1ebe2-7c80-4205-89ef-2cc8b2cf4dc6 does not exist
Dec  3 02:41:28 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev e72c6680-acee-46a6-8265-90adc8307442 does not exist
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:41:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:41:28 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:29 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:41:29 compute-0 nova_compute[351485]: 2025-12-03 02:41:29.589 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:29 compute-0 podman[158098]: time="2025-12-03T02:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:41:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
Dec  3 02:41:29 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.048829569 +0000 UTC m=+0.086163093 container create 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.019304725 +0000 UTC m=+0.056638249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:30 compute-0 systemd[1]: Started libpod-conmon-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope.
Dec  3 02:41:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.2072737 +0000 UTC m=+0.244607264 container init 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.229367223 +0000 UTC m=+0.266700737 container start 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.236281598 +0000 UTC m=+0.273615182 container attach 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:41:30 compute-0 sweet_curie[489027]: 167 167
Dec  3 02:41:30 compute-0 systemd[1]: libpod-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope: Deactivated successfully.
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.243865042 +0000 UTC m=+0.281198566 container died 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 02:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b27f78b2f3803d89acfe3075bb3cccfec1e4617734e8d5774151ceb321774cf-merged.mount: Deactivated successfully.
Dec  3 02:41:30 compute-0 podman[489013]: 2025-12-03 02:41:30.318697084 +0000 UTC m=+0.356030568 container remove 73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:41:30 compute-0 systemd[1]: libpod-conmon-73641ca0f5886b9c1c56dee33863cc5d64f3fce45abb728228679327a5f79c31.scope: Deactivated successfully.
Dec  3 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.591245366 +0000 UTC m=+0.090532736 container create ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.554640913 +0000 UTC m=+0.053928333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:30 compute-0 systemd[1]: Started libpod-conmon-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope.
Dec  3 02:41:30 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.770067472 +0000 UTC m=+0.269354902 container init ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.801137869 +0000 UTC m=+0.300425249 container start ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:41:30 compute-0 podman[489052]: 2025-12-03 02:41:30.807620282 +0000 UTC m=+0.306907702 container attach ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: ERROR   02:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:41:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:41:31 compute-0 podman[489083]: 2025-12-03 02:41:31.886371583 +0000 UTC m=+0.104581952 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:41:31 compute-0 podman[489082]: 2025-12-03 02:41:31.897578019 +0000 UTC m=+0.129253228 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64)
Dec  3 02:41:31 compute-0 podman[489084]: 2025-12-03 02:41:31.910573696 +0000 UTC m=+0.117888898 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0)
Dec  3 02:41:31 compute-0 podman[489080]: 2025-12-03 02:41:31.92417713 +0000 UTC m=+0.156955840 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Dec  3 02:41:31 compute-0 podman[489085]: 2025-12-03 02:41:31.929081268 +0000 UTC m=+0.140004822 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:41:31 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  3 02:41:32 compute-0 modest_mclean[489068]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:41:32 compute-0 modest_mclean[489068]: --> relative data size: 1.0
Dec  3 02:41:32 compute-0 modest_mclean[489068]: --> All data devices are unavailable
Dec  3 02:41:32 compute-0 systemd[1]: libpod-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Deactivated successfully.
Dec  3 02:41:32 compute-0 systemd[1]: libpod-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Consumed 1.266s CPU time.
Dec  3 02:41:32 compute-0 podman[489052]: 2025-12-03 02:41:32.115936802 +0000 UTC m=+1.615224202 container died ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdd72cd677a1dae63363c9760b48253e067eabf9e7432b325ba09183eea7d545-merged.mount: Deactivated successfully.
Dec  3 02:41:32 compute-0 nova_compute[351485]: 2025-12-03 02:41:32.220 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:32 compute-0 podman[489052]: 2025-12-03 02:41:32.223882128 +0000 UTC m=+1.723169468 container remove ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:41:32 compute-0 systemd[1]: libpod-conmon-ffb104b47b17d82c7640e2cbc3bd8382101a87fdd467a9c33b2b81cd010eb467.scope: Deactivated successfully.
Dec  3 02:41:32 compute-0 nova_compute[351485]: 2025-12-03 02:41:32.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.493770124 +0000 UTC m=+0.101115084 container create dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.460413513 +0000 UTC m=+0.067758483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:33 compute-0 systemd[1]: Started libpod-conmon-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope.
Dec  3 02:41:33 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.657286238 +0000 UTC m=+0.264631248 container init dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.673979489 +0000 UTC m=+0.281324449 container start dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.680826033 +0000 UTC m=+0.288171023 container attach dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:41:33 compute-0 suspicious_heyrovsky[489362]: 167 167
Dec  3 02:41:33 compute-0 systemd[1]: libpod-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope: Deactivated successfully.
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.688327414 +0000 UTC m=+0.295672354 container died dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 02:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-590fe348a658b8de6f4d425f33acbc8b8dfc9d4555f6d606203919aade8f5d6a-merged.mount: Deactivated successfully.
Dec  3 02:41:33 compute-0 podman[489346]: 2025-12-03 02:41:33.758757812 +0000 UTC m=+0.366102762 container remove dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 02:41:33 compute-0 systemd[1]: libpod-conmon-dc3badaccd94c19198f75e5d893f80d5645f834c0ade0fca30abd5b82b153c32.scope: Deactivated successfully.
Dec  3 02:41:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:33 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec  3 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.064654594 +0000 UTC m=+0.096620867 container create a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.025395716 +0000 UTC m=+0.057362059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:34 compute-0 systemd[1]: Started libpod-conmon-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope.
Dec  3 02:41:34 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.225412341 +0000 UTC m=+0.257378694 container init a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.245060155 +0000 UTC m=+0.277026438 container start a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:41:34 compute-0 podman[489384]: 2025-12-03 02:41:34.250864029 +0000 UTC m=+0.282830402 container attach a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:41:34 compute-0 nova_compute[351485]: 2025-12-03 02:41:34.592 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]: {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    "0": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "devices": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "/dev/loop3"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            ],
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_name": "ceph_lv0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_size": "21470642176",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "name": "ceph_lv0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "tags": {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_name": "ceph",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.crush_device_class": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.encrypted": "0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_id": "0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.vdo": "0"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            },
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "vg_name": "ceph_vg0"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        }
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    ],
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    "1": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "devices": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "/dev/loop4"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            ],
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_name": "ceph_lv1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_size": "21470642176",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "name": "ceph_lv1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "tags": {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_name": "ceph",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.crush_device_class": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.encrypted": "0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_id": "1",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.vdo": "0"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            },
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "vg_name": "ceph_vg1"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        }
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    ],
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    "2": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "devices": [
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "/dev/loop5"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            ],
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_name": "ceph_lv2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_size": "21470642176",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "name": "ceph_lv2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "tags": {
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.cluster_name": "ceph",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.crush_device_class": "",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.encrypted": "0",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osd_id": "2",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:                "ceph.vdo": "0"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            },
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "type": "block",
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:            "vg_name": "ceph_vg2"
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:        }
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]:    ]
Dec  3 02:41:35 compute-0 clever_bhaskara[489399]: }
Dec  3 02:41:35 compute-0 systemd[1]: libpod-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope: Deactivated successfully.
Dec  3 02:41:35 compute-0 podman[489384]: 2025-12-03 02:41:35.071429605 +0000 UTC m=+1.103395898 container died a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8af90076fbc878cb00bab4858231a4f0d6d30cdbbbaad5b6fa10d7a62ff6d11-merged.mount: Deactivated successfully.
Dec  3 02:41:35 compute-0 podman[489384]: 2025-12-03 02:41:35.170054349 +0000 UTC m=+1.202020622 container remove a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  3 02:41:35 compute-0 systemd[1]: libpod-conmon-a126ad8af0468dfefa669f35e090d2ecf00c97837e8efd82a13d89dee0dfabe9.scope: Deactivated successfully.
Dec  3 02:41:35 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.404095812 +0000 UTC m=+0.072007353 container create 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.371827712 +0000 UTC m=+0.039739293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:36 compute-0 systemd[1]: Started libpod-conmon-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope.
Dec  3 02:41:36 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.545677958 +0000 UTC m=+0.213589499 container init 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.563203902 +0000 UTC m=+0.231115433 container start 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.570313383 +0000 UTC m=+0.238224894 container attach 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  3 02:41:36 compute-0 optimistic_snyder[489573]: 167 167
Dec  3 02:41:36 compute-0 systemd[1]: libpod-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope: Deactivated successfully.
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.578640358 +0000 UTC m=+0.246551899 container died 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4f12fd02adc9c2e39fcf1cd58d081918a9b1baa14654f1435f6fbb09f3a29f-merged.mount: Deactivated successfully.
Dec  3 02:41:36 compute-0 podman[489557]: 2025-12-03 02:41:36.65241045 +0000 UTC m=+0.320321981 container remove 51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:41:36 compute-0 systemd[1]: libpod-conmon-51936ecfbe2d5f3ba01400fb57371281ff5347fb836967b69a54574c8cba01c5.scope: Deactivated successfully.
Dec  3 02:41:36 compute-0 podman[489598]: 2025-12-03 02:41:36.921598966 +0000 UTC m=+0.090978108 container create 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:41:36 compute-0 podman[489598]: 2025-12-03 02:41:36.889387157 +0000 UTC m=+0.058766349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:41:37 compute-0 systemd[1]: Started libpod-conmon-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope.
Dec  3 02:41:37 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.114122359 +0000 UTC m=+0.283501551 container init 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.154394206 +0000 UTC m=+0.323773348 container start 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:41:37 compute-0 podman[489598]: 2025-12-03 02:41:37.161734193 +0000 UTC m=+0.331113345 container attach 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:41:37 compute-0 nova_compute[351485]: 2025-12-03 02:41:37.225 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:37 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec  3 02:41:38 compute-0 kind_banzai[489613]: {
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_id": 2,
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "type": "bluestore"
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    },
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_id": 1,
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "type": "bluestore"
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    },
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_id": 0,
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:41:38 compute-0 kind_banzai[489613]:        "type": "bluestore"
Dec  3 02:41:38 compute-0 kind_banzai[489613]:    }
Dec  3 02:41:38 compute-0 kind_banzai[489613]: }
Dec  3 02:41:38 compute-0 systemd[1]: libpod-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Deactivated successfully.
Dec  3 02:41:38 compute-0 systemd[1]: libpod-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Consumed 1.294s CPU time.
Dec  3 02:41:38 compute-0 podman[489646]: 2025-12-03 02:41:38.539634887 +0000 UTC m=+0.057340299 container died 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-abdf2fa4b0cf2672f3ac81bdd7ea37ca381b7df6d7bf6017fd7d1499df3c01cc-merged.mount: Deactivated successfully.
Dec  3 02:41:38 compute-0 podman[489646]: 2025-12-03 02:41:38.638873688 +0000 UTC m=+0.156579100 container remove 3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:41:38 compute-0 systemd[1]: libpod-conmon-3c207baae55433121d7c9636dbefcfa4fb05f871b6c377b4fb01ae01561a8ddc.scope: Deactivated successfully.
Dec  3 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:41:38 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 25c92181-1724-4c33-8b90-457e3a6c724b does not exist
Dec  3 02:41:38 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89ae2803-1f21-4cb7-b069-70f5c6aa0484 does not exist
Dec  3 02:41:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:39 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:41:39 compute-0 nova_compute[351485]: 2025-12-03 02:41:39.598 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:39 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.613 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.614 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.614 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:41:41 compute-0 nova_compute[351485]: 2025-12-03 02:41:41.615 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:41:41 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:41:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174112701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.148 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.231 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.627 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.629 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3933MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.629 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.630 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.716 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.716 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:41:42 compute-0 nova_compute[351485]: 2025-12-03 02:41:42.742 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:41:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479137495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.238 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.253 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.272 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.275 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:41:43 compute-0 nova_compute[351485]: 2025-12-03 02:41:43.276 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:41:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:43 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.276 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.277 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.277 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.316 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.317 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:44 compute-0 nova_compute[351485]: 2025-12-03 02:41:44.602 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:45 compute-0 nova_compute[351485]: 2025-12-03 02:41:45.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:45 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:46 compute-0 nova_compute[351485]: 2025-12-03 02:41:46.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:41:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:41:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594545969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:41:47 compute-0 nova_compute[351485]: 2025-12-03 02:41:47.234 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:47 compute-0 nova_compute[351485]: 2025-12-03 02:41:47.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:47 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:48 compute-0 podman[489756]: 2025-12-03 02:41:48.87365695 +0000 UTC m=+0.111338273 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  3 02:41:48 compute-0 podman[489754]: 2025-12-03 02:41:48.894395066 +0000 UTC m=+0.139963471 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 02:41:48 compute-0 podman[489755]: 2025-12-03 02:41:48.927423348 +0000 UTC m=+0.168566788 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  3 02:41:49 compute-0 nova_compute[351485]: 2025-12-03 02:41:49.570 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:49 compute-0 nova_compute[351485]: 2025-12-03 02:41:49.606 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:49 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:51 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.238 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:52 compute-0 nova_compute[351485]: 2025-12-03 02:41:52.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:41:53 compute-0 nova_compute[351485]: 2025-12-03 02:41:53.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:41:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:54 compute-0 nova_compute[351485]: 2025-12-03 02:41:54.610 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:57 compute-0 nova_compute[351485]: 2025-12-03 02:41:57.242 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:57 compute-0 podman[489817]: 2025-12-03 02:41:57.904752223 +0000 UTC m=+0.157919427 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:41:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:41:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:41:59 compute-0 nova_compute[351485]: 2025-12-03 02:41:59.614 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.679 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:41:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:41:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:41:59 compute-0 podman[158098]: time="2025-12-03T02:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:41:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec  3 02:42:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: ERROR   02:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:42:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:42:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:02 compute-0 nova_compute[351485]: 2025-12-03 02:42:02.245 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:02 compute-0 podman[489839]: 2025-12-03 02:42:02.878744518 +0000 UTC m=+0.106223999 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public)
Dec  3 02:42:02 compute-0 podman[489843]: 2025-12-03 02:42:02.893206476 +0000 UTC m=+0.111978681 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:42:02 compute-0 podman[489838]: 2025-12-03 02:42:02.893607108 +0000 UTC m=+0.124942137 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:42:02 compute-0 podman[489837]: 2025-12-03 02:42:02.916890025 +0000 UTC m=+0.155549581 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf 
as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec  3 02:42:02 compute-0 podman[489836]: 2025-12-03 02:42:02.925106446 +0000 UTC m=+0.169715280 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  3 02:42:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:04 compute-0 nova_compute[351485]: 2025-12-03 02:42:04.617 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:07 compute-0 nova_compute[351485]: 2025-12-03 02:42:07.248 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:09 compute-0 nova_compute[351485]: 2025-12-03 02:42:09.620 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:12 compute-0 nova_compute[351485]: 2025-12-03 02:42:12.251 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:14 compute-0 nova_compute[351485]: 2025-12-03 02:42:14.632 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:17 compute-0 nova_compute[351485]: 2025-12-03 02:42:17.253 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:19 compute-0 nova_compute[351485]: 2025-12-03 02:42:19.645 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:19 compute-0 podman[489937]: 2025-12-03 02:42:19.855097095 +0000 UTC m=+0.114752009 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:42:19 compute-0 podman[489939]: 2025-12-03 02:42:19.866510877 +0000 UTC m=+0.108504013 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:42:19 compute-0 podman[489938]: 2025-12-03 02:42:19.89707082 +0000 UTC m=+0.140155176 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:42:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:22 compute-0 nova_compute[351485]: 2025-12-03 02:42:22.256 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:24 compute-0 nova_compute[351485]: 2025-12-03 02:42:24.649 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:27 compute-0 nova_compute[351485]: 2025-12-03 02:42:27.259 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:42:28
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes']
Dec  3 02:42:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:42:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.830427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748830619, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 250, "total_data_size": 1661997, "memory_usage": 1687128, "flush_reason": "Manual Compaction"}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748843446, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 976118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53318, "largest_seqno": 54433, "table_properties": {"data_size": 971995, "index_size": 1710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10895, "raw_average_key_size": 20, "raw_value_size": 963036, "raw_average_value_size": 1823, "num_data_blocks": 78, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729635, "oldest_key_time": 1764729635, "file_creation_time": 1764729748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 13428 microseconds, and 7542 cpu microseconds.
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.843871) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 976118 bytes OK
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.843899) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846315) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846336) EVENT_LOG_v1 {"time_micros": 1764729748846329, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.846360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1656863, prev total WAL file size 1656863, number of live WAL files 2.
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.848218) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(953KB)], [128(9052KB)]
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748848303, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10246183, "oldest_snapshot_seqno": -1}
Dec  3 02:42:28 compute-0 podman[489996]: 2025-12-03 02:42:28.887596009 +0000 UTC m=+0.135200876 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6741 keys, 7616915 bytes, temperature: kUnknown
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748908895, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7616915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7576959, "index_size": 21987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 176737, "raw_average_key_size": 26, "raw_value_size": 7460042, "raw_average_value_size": 1106, "num_data_blocks": 867, "num_entries": 6741, "num_filter_entries": 6741, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.909093) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7616915 bytes
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.911033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(18.3) write-amplify(7.8) OK, records in: 7202, records dropped: 461 output_compression: NoCompression
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.911052) EVENT_LOG_v1 {"time_micros": 1764729748911043, "job": 78, "event": "compaction_finished", "compaction_time_micros": 60647, "compaction_time_cpu_micros": 37086, "output_level": 6, "num_output_files": 1, "total_output_size": 7616915, "num_input_records": 7202, "num_output_records": 6741, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748911374, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729748913061, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.847468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:28 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:42:28.913381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:42:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:42:29 compute-0 nova_compute[351485]: 2025-12-03 02:42:29.653 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:29 compute-0 podman[158098]: time="2025-12-03T02:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:42:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
Dec  3 02:42:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:42:31 compute-0 openstack_network_exporter[368278]: ERROR   02:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:42:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:32 compute-0 nova_compute[351485]: 2025-12-03 02:42:32.262 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  3 02:42:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:33 compute-0 podman[490019]: 2025-12-03 02:42:33.870199175 +0000 UTC m=+0.100795675 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:42:33 compute-0 podman[490026]: 2025-12-03 02:42:33.889237123 +0000 UTC m=+0.109587574 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:42:33 compute-0 podman[490020]: 2025-12-03 02:42:33.895981403 +0000 UTC m=+0.120938894 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 02:42:33 compute-0 podman[490018]: 2025-12-03 02:42:33.897071354 +0000 UTC m=+0.137218524 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec  3 02:42:33 compute-0 podman[490017]: 2025-12-03 02:42:33.939423859 +0000 UTC m=+0.185342862 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  3 02:42:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:34 compute-0 nova_compute[351485]: 2025-12-03 02:42:34.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:34 compute-0 nova_compute[351485]: 2025-12-03 02:42:34.657 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:37 compute-0 nova_compute[351485]: 2025-12-03 02:42:37.265 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:42:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:42:39 compute-0 nova_compute[351485]: 2025-12-03 02:42:39.659 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 700d9fe8-8cb3-4633-bcb0-1c5e954366d1 does not exist
Dec  3 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b3ad8527-ef30-4ef4-8379-af37c8c1102c does not exist
Dec  3 02:42:40 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 713bebe8-5390-41bc-a7f9-35ee3694797c does not exist
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:42:40 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:42:40 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:41 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.610 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.611 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.611 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:42:41 compute-0 nova_compute[351485]: 2025-12-03 02:42:41.612 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.615332451 +0000 UTC m=+0.087496280 container create ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.588129813 +0000 UTC m=+0.060293622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:41 compute-0 systemd[1]: Started libpod-conmon-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope.
Dec  3 02:42:41 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.793082837 +0000 UTC m=+0.265246646 container init ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.807182645 +0000 UTC m=+0.279346444 container start ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.813670788 +0000 UTC m=+0.285834607 container attach ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 02:42:41 compute-0 festive_murdock[490400]: 167 167
Dec  3 02:42:41 compute-0 systemd[1]: libpod-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope: Deactivated successfully.
Dec  3 02:42:41 compute-0 conmon[490400]: conmon ce69163649fbca7cfcdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope/container/memory.events
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.821984883 +0000 UTC m=+0.294148712 container died ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fbb1fb5092a9c4b6f1798b2d0a963437d2fd5814972b2314a05e0788f16c854-merged.mount: Deactivated successfully.
Dec  3 02:42:41 compute-0 podman[490385]: 2025-12-03 02:42:41.912728174 +0000 UTC m=+0.384891973 container remove ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_murdock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:42:41 compute-0 systemd[1]: libpod-conmon-ce69163649fbca7cfcdd476cab32948601439675f3ecbe4bb42171ccfe5239f0.scope: Deactivated successfully.
Dec  3 02:42:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:42 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:42:42 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818334107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.116 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.201484032 +0000 UTC m=+0.112785364 container create 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  3 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.16385437 +0000 UTC m=+0.075155842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:42 compute-0 systemd[1]: Started libpod-conmon-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope.
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.268 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:42 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.329092933 +0000 UTC m=+0.240394335 container init 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.35058109 +0000 UTC m=+0.261882442 container start 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:42:42 compute-0 podman[490442]: 2025-12-03 02:42:42.355268692 +0000 UTC m=+0.266570064 container attach 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.529 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.531 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.532 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.532 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.682 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.683 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:42:42 compute-0 nova_compute[351485]: 2025-12-03 02:42:42.703 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:42:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:42:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771843871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.298 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.312 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.349 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.352 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:42:43 compute-0 nova_compute[351485]: 2025-12-03 02:42:43.353 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:42:43 compute-0 boring_solomon[490460]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:42:43 compute-0 boring_solomon[490460]: --> relative data size: 1.0
Dec  3 02:42:43 compute-0 boring_solomon[490460]: --> All data devices are unavailable
Dec  3 02:42:43 compute-0 systemd[1]: libpod-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Deactivated successfully.
Dec  3 02:42:43 compute-0 systemd[1]: libpod-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Consumed 1.279s CPU time.
Dec  3 02:42:43 compute-0 podman[490511]: 2025-12-03 02:42:43.766719622 +0000 UTC m=+0.048241502 container died 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8074d7daa631630a065e53d80d5bda280dfe243ed83e9440005e42d76ef612-merged.mount: Deactivated successfully.
Dec  3 02:42:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:43 compute-0 podman[490511]: 2025-12-03 02:42:43.882893631 +0000 UTC m=+0.164415461 container remove 2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:42:43 compute-0 systemd[1]: libpod-conmon-2b3cf780b177289831f7ab8d7125830700a36d7fc86034e6b9b5c997ce13b7f2.scope: Deactivated successfully.
Dec  3 02:42:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.354 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.355 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.355 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.371 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 02:42:44 compute-0 nova_compute[351485]: 2025-12-03 02:42:44.663 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.057134478 +0000 UTC m=+0.086094321 container create 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.020805062 +0000 UTC m=+0.049764965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:45 compute-0 systemd[1]: Started libpod-conmon-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope.
Dec  3 02:42:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.198138557 +0000 UTC m=+0.227098470 container init 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.208609522 +0000 UTC m=+0.237569335 container start 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.213769388 +0000 UTC m=+0.242729251 container attach 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:42:45 compute-0 magical_boyd[490680]: 167 167
Dec  3 02:42:45 compute-0 systemd[1]: libpod-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope: Deactivated successfully.
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.216581277 +0000 UTC m=+0.245541090 container died 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 02:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb206d9fe45f4069c29135de71bc6a798d77939322bd7d54f3e6aee2e9cb7e7-merged.mount: Deactivated successfully.
Dec  3 02:42:45 compute-0 podman[490664]: 2025-12-03 02:42:45.284170355 +0000 UTC m=+0.313130178 container remove 235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:42:45 compute-0 systemd[1]: libpod-conmon-235a40a5ac741947d6032fb146c0906d56290f195dd98c5e8783e8b5ef4cc20b.scope: Deactivated successfully.
Dec  3 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.564706241 +0000 UTC m=+0.083682502 container create 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 02:42:45 compute-0 nova_compute[351485]: 2025-12-03 02:42:45.595 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.536890026 +0000 UTC m=+0.055866297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:45 compute-0 systemd[1]: Started libpod-conmon-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope.
Dec  3 02:42:45 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.745678228 +0000 UTC m=+0.264654539 container init 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.772795514 +0000 UTC m=+0.291771765 container start 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 02:42:45 compute-0 podman[490702]: 2025-12-03 02:42:45.779946485 +0000 UTC m=+0.298922796 container attach 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 02:42:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:46 compute-0 nova_compute[351485]: 2025-12-03 02:42:46.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:46 compute-0 admiring_curie[490718]: {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    "0": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "devices": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "/dev/loop3"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            ],
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_name": "ceph_lv0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_size": "21470642176",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "name": "ceph_lv0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "tags": {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_name": "ceph",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.crush_device_class": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.encrypted": "0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_id": "0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.vdo": "0"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            },
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "vg_name": "ceph_vg0"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        }
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    ],
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    "1": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "devices": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "/dev/loop4"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            ],
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_name": "ceph_lv1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_size": "21470642176",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "name": "ceph_lv1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "tags": {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_name": "ceph",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.crush_device_class": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.encrypted": "0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_id": "1",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.vdo": "0"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            },
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "vg_name": "ceph_vg1"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        }
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    ],
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    "2": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "devices": [
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "/dev/loop5"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            ],
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_name": "ceph_lv2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_size": "21470642176",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "name": "ceph_lv2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "tags": {
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.cluster_name": "ceph",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.crush_device_class": "",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.encrypted": "0",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osd_id": "2",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:                "ceph.vdo": "0"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            },
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "type": "block",
Dec  3 02:42:46 compute-0 admiring_curie[490718]:            "vg_name": "ceph_vg2"
Dec  3 02:42:46 compute-0 admiring_curie[490718]:        }
Dec  3 02:42:46 compute-0 admiring_curie[490718]:    ]
Dec  3 02:42:46 compute-0 admiring_curie[490718]: }
Dec  3 02:42:46 compute-0 systemd[1]: libpod-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope: Deactivated successfully.
Dec  3 02:42:46 compute-0 podman[490702]: 2025-12-03 02:42:46.72729279 +0000 UTC m=+1.246269041 container died 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-817f7a8c6b86f9fb534afb624d79aeea47ee5e063a712426713060a588243639-merged.mount: Deactivated successfully.
Dec  3 02:42:46 compute-0 podman[490702]: 2025-12-03 02:42:46.845946998 +0000 UTC m=+1.364923249 container remove 0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 02:42:46 compute-0 systemd[1]: libpod-conmon-0e456aaa0160e17d709cd0fbc29ca5422a39c1d0846926280783493f2935ec50.scope: Deactivated successfully.
Dec  3 02:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:42:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:42:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3200321659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:42:47 compute-0 nova_compute[351485]: 2025-12-03 02:42:47.271 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.065486232 +0000 UTC m=+0.104980173 container create 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.016009576 +0000 UTC m=+0.055503587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:48 compute-0 systemd[1]: Started libpod-conmon-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope.
Dec  3 02:42:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.202698735 +0000 UTC m=+0.242192696 container init 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.222332709 +0000 UTC m=+0.261826640 container start 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.229621714 +0000 UTC m=+0.269115705 container attach 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:42:48 compute-0 sleepy_colden[490894]: 167 167
Dec  3 02:42:48 compute-0 systemd[1]: libpod-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope: Deactivated successfully.
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.235254963 +0000 UTC m=+0.274748894 container died 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  3 02:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e5035cf03ee1529707d1d7af1ef9fadbe1e1f962aa926f3081958c8266ab1f-merged.mount: Deactivated successfully.
Dec  3 02:42:48 compute-0 podman[490878]: 2025-12-03 02:42:48.320429477 +0000 UTC m=+0.359923418 container remove 5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 02:42:48 compute-0 systemd[1]: libpod-conmon-5ae51d004d9a9609c1f1d2dbfcbe4247da8d3fc4d6eafb1ef472857122113539.scope: Deactivated successfully.
Dec  3 02:42:48 compute-0 nova_compute[351485]: 2025-12-03 02:42:48.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.647306961 +0000 UTC m=+0.096249027 container create c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.61607708 +0000 UTC m=+0.065019176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:42:48 compute-0 systemd[1]: Started libpod-conmon-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope.
Dec  3 02:42:48 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:42:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.834499834 +0000 UTC m=+0.283441950 container init c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.8573813 +0000 UTC m=+0.306323356 container start c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec  3 02:42:48 compute-0 podman[490918]: 2025-12-03 02:42:48.864708897 +0000 UTC m=+0.313651013 container attach c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:42:49 compute-0 nova_compute[351485]: 2025-12-03 02:42:49.666 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]: {
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_id": 2,
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "type": "bluestore"
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    },
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_id": 1,
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "type": "bluestore"
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    },
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_id": 0,
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:        "type": "bluestore"
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]:    }
Dec  3 02:42:50 compute-0 eager_heyrovsky[490933]: }
Dec  3 02:42:50 compute-0 systemd[1]: libpod-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Deactivated successfully.
Dec  3 02:42:50 compute-0 podman[490918]: 2025-12-03 02:42:50.148985619 +0000 UTC m=+1.597927685 container died c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 02:42:50 compute-0 systemd[1]: libpod-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Consumed 1.282s CPU time.
Dec  3 02:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-263b6b05bdce39395e5fbd33234de832984777082ace5c3cea3476d393397b47-merged.mount: Deactivated successfully.
Dec  3 02:42:50 compute-0 podman[490918]: 2025-12-03 02:42:50.256697978 +0000 UTC m=+1.705640014 container remove c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:42:50 compute-0 systemd[1]: libpod-conmon-c5d286456b9b9e57f239e8aae2b3144a5d0c8f120e10714cf8b0453a8ba54bc0.scope: Deactivated successfully.
Dec  3 02:42:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:42:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:50 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:42:50 compute-0 podman[490978]: 2025-12-03 02:42:50.322815714 +0000 UTC m=+0.119416841 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 02:42:50 compute-0 podman[490970]: 2025-12-03 02:42:50.323985287 +0000 UTC m=+0.113503414 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 02:42:50 compute-0 podman[490979]: 2025-12-03 02:42:50.325099629 +0000 UTC m=+0.115804879 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:42:50 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d7409c70-f8b7-4d40-a0b0-4580e9765683 does not exist
Dec  3 02:42:50 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev bcf0ab97-a8c2-40b2-9c8f-b5b4fdb73957 does not exist
Dec  3 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.576 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 02:42:50 compute-0 nova_compute[351485]: 2025-12-03 02:42:50.599 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 02:42:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:51 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:42:51 compute-0 nova_compute[351485]: 2025-12-03 02:42:51.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:52 compute-0 nova_compute[351485]: 2025-12-03 02:42:52.276 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:42:53 compute-0 nova_compute[351485]: 2025-12-03 02:42:53.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:42:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:54 compute-0 nova_compute[351485]: 2025-12-03 02:42:54.669 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:57 compute-0 nova_compute[351485]: 2025-12-03 02:42:57.279 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:42:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:42:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:42:59 compute-0 nova_compute[351485]: 2025-12-03 02:42:59.673 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.680 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:42:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:42:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:42:59 compute-0 podman[158098]: time="2025-12-03T02:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:42:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
Dec  3 02:42:59 compute-0 podman[491088]: 2025-12-03 02:42:59.887198198 +0000 UTC m=+0.131856002 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:43:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: ERROR   02:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:43:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:43:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:02 compute-0 nova_compute[351485]: 2025-12-03 02:43:02.282 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:04 compute-0 nova_compute[351485]: 2025-12-03 02:43:04.677 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:04 compute-0 podman[491109]: 2025-12-03 02:43:04.882171854 +0000 UTC m=+0.125357178 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  3 02:43:04 compute-0 podman[491112]: 2025-12-03 02:43:04.887189256 +0000 UTC m=+0.109790879 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:43:04 compute-0 podman[491111]: 2025-12-03 02:43:04.9025951 +0000 UTC m=+0.133565759 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  3 02:43:04 compute-0 podman[491110]: 2025-12-03 02:43:04.907308313 +0000 UTC m=+0.142421049 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:43:04 compute-0 podman[491108]: 2025-12-03 02:43:04.907633112 +0000 UTC m=+0.153546433 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  3 02:43:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:07 compute-0 nova_compute[351485]: 2025-12-03 02:43:07.286 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:09 compute-0 nova_compute[351485]: 2025-12-03 02:43:09.680 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:12 compute-0 nova_compute[351485]: 2025-12-03 02:43:12.289 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:14 compute-0 nova_compute[351485]: 2025-12-03 02:43:14.684 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:17 compute-0 nova_compute[351485]: 2025-12-03 02:43:17.293 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.519 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.522 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.522 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.523 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f95e7dd37d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.524 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c780e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7ecd310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.525 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95ea385bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.526 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.527 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e6c78440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.527 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f95e6c78050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f95e7dd3860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.529 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f95e7deebd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f95e6c78140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f95e7dd3e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f95e7dd18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f95e7dd3d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f95e7dd3260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f95e7dd3830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f95e7dd3380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.532 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f95e7dd33e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f95e6c78410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.533 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.528 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.534 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd34d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e9d6a5a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.536 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e99e15e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd2600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f95e7dd3440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f95e7dd34a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f95e7d39040>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.537 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f95e7dd3fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f95e67758b0>] with cache [{}], pollster history [{'memory.usage': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f95e7dd3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f95e7dd3d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f95e7dd1850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f95e7dd3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f95e7dd3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f95e7dd1880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f95e7dd3dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f95e7dd35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f95e7dd3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f95e7dd3f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f95e7d97c50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.548 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 ceilometer_agent_compute[363035]: 2025-12-03 02:43:19.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  3 02:43:19 compute-0 nova_compute[351485]: 2025-12-03 02:43:19.688 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:20 compute-0 podman[491215]: 2025-12-03 02:43:20.890400461 +0000 UTC m=+0.130274238 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 02:43:20 compute-0 podman[491217]: 2025-12-03 02:43:20.896322098 +0000 UTC m=+0.122159669 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:43:20 compute-0 podman[491216]: 2025-12-03 02:43:20.913893434 +0000 UTC m=+0.150990022 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  3 02:43:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:22 compute-0 nova_compute[351485]: 2025-12-03 02:43:22.296 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:24 compute-0 nova_compute[351485]: 2025-12-03 02:43:24.690 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:27 compute-0 nova_compute[351485]: 2025-12-03 02:43:27.299 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:27 compute-0 nova_compute[351485]: 2025-12-03 02:43:27.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:43:28
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta']
Dec  3 02:43:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:43:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:43:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:43:29 compute-0 nova_compute[351485]: 2025-12-03 02:43:29.692 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:29 compute-0 podman[158098]: time="2025-12-03T02:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:43:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
Dec  3 02:43:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:30 compute-0 podman[491270]: 2025-12-03 02:43:30.870583781 +0000 UTC m=+0.120860141 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: ERROR   02:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:43:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:43:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:32 compute-0 nova_compute[351485]: 2025-12-03 02:43:32.301 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:34 compute-0 nova_compute[351485]: 2025-12-03 02:43:34.695 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:35 compute-0 podman[491295]: 2025-12-03 02:43:35.874757909 +0000 UTC m=+0.111573320 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64)
Dec  3 02:43:35 compute-0 podman[491301]: 2025-12-03 02:43:35.882978011 +0000 UTC m=+0.100558059 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 02:43:35 compute-0 podman[491293]: 2025-12-03 02:43:35.89428164 +0000 UTC m=+0.134348433 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Dec  3 02:43:35 compute-0 podman[491294]: 2025-12-03 02:43:35.899688842 +0000 UTC m=+0.132800778 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  3 02:43:35 compute-0 podman[491292]: 2025-12-03 02:43:35.924494002 +0000 UTC m=+0.172578411 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  3 02:43:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:36 compute-0 nova_compute[351485]: 2025-12-03 02:43:36.594 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:37 compute-0 nova_compute[351485]: 2025-12-03 02:43:37.304 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:43:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:43:39 compute-0 nova_compute[351485]: 2025-12-03 02:43:39.699 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.307 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.575 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.621 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.622 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.623 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:43:42 compute-0 nova_compute[351485]: 2025-12-03 02:43:42.623 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:43:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:43:43 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097921568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.147 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.751 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.754 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3954MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.754 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.755 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:43:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.854 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.855 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:43:43 compute-0 nova_compute[351485]: 2025-12-03 02:43:43.884 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:43:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:43:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117918483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.497 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.506 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.527 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.528 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.528 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:43:44 compute-0 nova_compute[351485]: 2025-12-03 02:43:44.703 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.530 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.530 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.531 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.550 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:43:46 compute-0 nova_compute[351485]: 2025-12-03 02:43:46.551 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:43:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:43:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:43:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1881994563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:43:47 compute-0 nova_compute[351485]: 2025-12-03 02:43:47.310 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:47 compute-0 podman[158098]: time="2025-12-03T02:43:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:43:48 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43258 "" "Go-http-client/1.1"
Dec  3 02:43:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:48 compute-0 nova_compute[351485]: 2025-12-03 02:43:48.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:48 compute-0 nova_compute[351485]: 2025-12-03 02:43:48.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:49 compute-0 nova_compute[351485]: 2025-12-03 02:43:49.706 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:50 compute-0 nova_compute[351485]: 2025-12-03 02:43:50.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:51 compute-0 podman[491520]: 2025-12-03 02:43:51.135338077 +0000 UTC m=+0.098438979 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:43:51 compute-0 podman[491518]: 2025-12-03 02:43:51.157237015 +0000 UTC m=+0.129630789 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  3 02:43:51 compute-0 podman[491519]: 2025-12-03 02:43:51.168695838 +0000 UTC m=+0.133961361 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev b2d5e90f-9e80-4472-91ca-6c1fab50ff5b does not exist
Dec  3 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 253330e7-f990-43ec-954c-e9c367c9a1a0 does not exist
Dec  3 02:43:51 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 1b0f409e-073b-4933-8a1e-06fafcee3036 does not exist
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:43:51 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:43:51 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:43:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:43:52 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:43:52 compute-0 nova_compute[351485]: 2025-12-03 02:43:52.313 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.826920893 +0000 UTC m=+0.071859589 container create 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.791575545 +0000 UTC m=+0.036514271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:52 compute-0 systemd[1]: Started libpod-conmon-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope.
Dec  3 02:43:52 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:52 compute-0 podman[491771]: 2025-12-03 02:43:52.992140095 +0000 UTC m=+0.237078851 container init 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec  3 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.00364655 +0000 UTC m=+0.248585256 container start 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.010715049 +0000 UTC m=+0.255653745 container attach 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:43:53 compute-0 bold_einstein[491786]: 167 167
Dec  3 02:43:53 compute-0 systemd[1]: libpod-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope: Deactivated successfully.
Dec  3 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.017866021 +0000 UTC m=+0.262804717 container died 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e7be88b0be2d0f71c4269f4a018423cec3d14ce7eab737d2f22799440abf8c-merged.mount: Deactivated successfully.
Dec  3 02:43:53 compute-0 podman[491771]: 2025-12-03 02:43:53.094848863 +0000 UTC m=+0.339787529 container remove 220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:43:53 compute-0 systemd[1]: libpod-conmon-220c587150cc4eb9c1599c2ae4f46e3db57441e808a81d515cee76ffbf8d101e.scope: Deactivated successfully.
Dec  3 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.345 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.382892652 +0000 UTC m=+0.088662183 container create 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.35482467 +0000 UTC m=+0.060594251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:53 compute-0 systemd[1]: Started libpod-conmon-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope.
Dec  3 02:43:53 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.550297866 +0000 UTC m=+0.256067467 container init 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.58018578 +0000 UTC m=+0.285955331 container start 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Dec  3 02:43:53 compute-0 podman[491809]: 2025-12-03 02:43:53.587338842 +0000 UTC m=+0.293108413 container attach 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.588 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:53 compute-0 nova_compute[351485]: 2025-12-03 02:43:53.589 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:43:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:54 compute-0 nova_compute[351485]: 2025-12-03 02:43:54.578 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:43:54 compute-0 nova_compute[351485]: 2025-12-03 02:43:54.710 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:54 compute-0 focused_mclean[491824]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:43:54 compute-0 focused_mclean[491824]: --> relative data size: 1.0
Dec  3 02:43:54 compute-0 focused_mclean[491824]: --> All data devices are unavailable
Dec  3 02:43:54 compute-0 systemd[1]: libpod-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Deactivated successfully.
Dec  3 02:43:54 compute-0 systemd[1]: libpod-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Consumed 1.280s CPU time.
Dec  3 02:43:54 compute-0 podman[491809]: 2025-12-03 02:43:54.916177751 +0000 UTC m=+1.621947322 container died 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a911859b1dc6fbcca357c7398169481a4130068ee53506847abe65666fee12fc-merged.mount: Deactivated successfully.
Dec  3 02:43:55 compute-0 podman[491809]: 2025-12-03 02:43:55.029476438 +0000 UTC m=+1.735245999 container remove 1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mclean, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:43:55 compute-0 systemd[1]: libpod-conmon-1b60b5d07582b450902f5d064a546a62f6a6877d741f90a21fb75c980ebc2413.scope: Deactivated successfully.
Dec  3 02:43:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.286354387 +0000 UTC m=+0.100137777 container create d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.249703243 +0000 UTC m=+0.063486693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:56 compute-0 systemd[1]: Started libpod-conmon-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope.
Dec  3 02:43:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.444470399 +0000 UTC m=+0.258253849 container init d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.46043964 +0000 UTC m=+0.274223030 container start d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.468205229 +0000 UTC m=+0.281988769 container attach d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  3 02:43:56 compute-0 recursing_burnell[492018]: 167 167
Dec  3 02:43:56 compute-0 systemd[1]: libpod-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope: Deactivated successfully.
Dec  3 02:43:56 compute-0 conmon[492018]: conmon d844a8ad48a0ed51046f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope/container/memory.events
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.474517597 +0000 UTC m=+0.288300977 container died d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07502613c02a388831c07cd57c328e93f7c55a5145f1d0f0916dafd1c4eeb39b-merged.mount: Deactivated successfully.
Dec  3 02:43:56 compute-0 podman[492003]: 2025-12-03 02:43:56.547078004 +0000 UTC m=+0.360861364 container remove d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:43:56 compute-0 systemd[1]: libpod-conmon-d844a8ad48a0ed51046f2f28200d3c0353cf3501bb8c0aa65ec00c6734ba2c90.scope: Deactivated successfully.
Dec  3 02:43:56 compute-0 podman[492041]: 2025-12-03 02:43:56.817980049 +0000 UTC m=+0.091956686 container create 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  3 02:43:56 compute-0 podman[492041]: 2025-12-03 02:43:56.784223617 +0000 UTC m=+0.058200294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:56 compute-0 systemd[1]: Started libpod-conmon-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope.
Dec  3 02:43:56 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.012765416 +0000 UTC m=+0.286742103 container init 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  3 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.041808816 +0000 UTC m=+0.315785443 container start 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.0497683 +0000 UTC m=+0.323744977 container attach 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  3 02:43:57 compute-0 nova_compute[351485]: 2025-12-03 02:43:57.315 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:57 compute-0 awesome_poincare[492056]: {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    "0": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "devices": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "/dev/loop3"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            ],
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_name": "ceph_lv0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_size": "21470642176",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "name": "ceph_lv0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "tags": {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_name": "ceph",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.crush_device_class": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.encrypted": "0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_id": "0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.vdo": "0"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            },
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "vg_name": "ceph_vg0"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        }
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    ],
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    "1": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "devices": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "/dev/loop4"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            ],
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_name": "ceph_lv1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_size": "21470642176",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "name": "ceph_lv1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "tags": {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_name": "ceph",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.crush_device_class": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.encrypted": "0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_id": "1",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.vdo": "0"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            },
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "vg_name": "ceph_vg1"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        }
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    ],
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    "2": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "devices": [
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "/dev/loop5"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            ],
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_name": "ceph_lv2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_size": "21470642176",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "name": "ceph_lv2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "tags": {
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.cluster_name": "ceph",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.crush_device_class": "",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.encrypted": "0",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osd_id": "2",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:                "ceph.vdo": "0"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            },
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "type": "block",
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:            "vg_name": "ceph_vg2"
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:        }
Dec  3 02:43:57 compute-0 awesome_poincare[492056]:    ]
Dec  3 02:43:57 compute-0 awesome_poincare[492056]: }
Dec  3 02:43:57 compute-0 systemd[1]: libpod-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope: Deactivated successfully.
Dec  3 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.844308552 +0000 UTC m=+1.118285189 container died 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4a583eb269232d1989fa8c06087fc2715ff83174e6ff3d10c6d7159ca50b87c-merged.mount: Deactivated successfully.
Dec  3 02:43:57 compute-0 podman[492041]: 2025-12-03 02:43:57.954655616 +0000 UTC m=+1.228632253 container remove 1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:43:57 compute-0 systemd[1]: libpod-conmon-1d2dc362b16e171d084d6525a3189ca4c5a657a8f90a576316b167a7b8de5136.scope: Deactivated successfully.
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:43:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:43:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.125468546 +0000 UTC m=+0.064551323 container create 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 02:43:59 compute-0 systemd[1]: Started libpod-conmon-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope.
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.103389942 +0000 UTC m=+0.042472769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.269980854 +0000 UTC m=+0.209063661 container init 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.288798695 +0000 UTC m=+0.227881512 container start 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.296254805 +0000 UTC m=+0.235337622 container attach 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:43:59 compute-0 great_haibt[492235]: 167 167
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.301323428 +0000 UTC m=+0.240406235 container died 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  3 02:43:59 compute-0 systemd[1]: libpod-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope: Deactivated successfully.
Dec  3 02:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f0d8b9ddaa572c76f761dd7592eb075aaf64297b3abe00b626799970a7e0698-merged.mount: Deactivated successfully.
Dec  3 02:43:59 compute-0 podman[492220]: 2025-12-03 02:43:59.375716688 +0000 UTC m=+0.314799505 container remove 955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:43:59 compute-0 systemd[1]: libpod-conmon-955740e4e28513e487f8919c2b45782d1d5ea17b421865de945f27dad45f135b.scope: Deactivated successfully.
Dec  3 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.640812299 +0000 UTC m=+0.082101728 container create 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.681 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:43:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:43:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.605692227 +0000 UTC m=+0.046981666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:43:59 compute-0 nova_compute[351485]: 2025-12-03 02:43:59.712 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:43:59 compute-0 systemd[1]: Started libpod-conmon-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope.
Dec  3 02:43:59 compute-0 podman[158098]: time="2025-12-03T02:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:43:59 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.804686923 +0000 UTC m=+0.245976392 container init 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.836746258 +0000 UTC m=+0.278035697 container start 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  3 02:43:59 compute-0 podman[492261]: 2025-12-03 02:43:59.844676212 +0000 UTC m=+0.285965681 container attach 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44147 "" "Go-http-client/1.1"
Dec  3 02:43:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8625 "" "Go-http-client/1.1"
Dec  3 02:44:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]: {
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_id": 2,
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "type": "bluestore"
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    },
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_id": 1,
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "type": "bluestore"
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    },
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_id": 0,
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:        "type": "bluestore"
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]:    }
Dec  3 02:44:01 compute-0 vigorous_lumiere[492277]: }
Dec  3 02:44:01 compute-0 systemd[1]: libpod-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Deactivated successfully.
Dec  3 02:44:01 compute-0 podman[492261]: 2025-12-03 02:44:01.052297561 +0000 UTC m=+1.493586990 container died 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:44:01 compute-0 systemd[1]: libpod-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Consumed 1.217s CPU time.
Dec  3 02:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-565d59719158a08036a6988132827fdee52ba0667968a2ab90ecaff2491d7e2f-merged.mount: Deactivated successfully.
Dec  3 02:44:01 compute-0 podman[492261]: 2025-12-03 02:44:01.167218254 +0000 UTC m=+1.608507653 container remove 2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:44:01 compute-0 systemd[1]: libpod-conmon-2dde7b8e399409409416498473d94d3295ec4a7d510977c6104dc4d46ca5cac6.scope: Deactivated successfully.
Dec  3 02:44:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:44:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:44:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:44:01 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:44:01 compute-0 podman[492311]: 2025-12-03 02:44:01.256167014 +0000 UTC m=+0.152469754 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:44:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev d56e715c-d9b4-4351-8152-8789b3f16756 does not exist
Dec  3 02:44:01 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 89cf344b-5911-4c61-bc81-bba6f352f236 does not exist
Dec  3 02:44:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:44:01 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: ERROR   02:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:44:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:44:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:02 compute-0 nova_compute[351485]: 2025-12-03 02:44:02.319 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:04 compute-0 nova_compute[351485]: 2025-12-03 02:44:04.717 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:06 compute-0 podman[492393]: 2025-12-03 02:44:06.87723211 +0000 UTC m=+0.108353439 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4)
Dec  3 02:44:06 compute-0 podman[492391]: 2025-12-03 02:44:06.883810155 +0000 UTC m=+0.122067236 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  3 02:44:06 compute-0 podman[492399]: 2025-12-03 02:44:06.904860269 +0000 UTC m=+0.118609088 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 02:44:06 compute-0 podman[492392]: 2025-12-03 02:44:06.906012062 +0000 UTC m=+0.134798115 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:44:06 compute-0 podman[492390]: 2025-12-03 02:44:06.938218911 +0000 UTC m=+0.183962133 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 02:44:07 compute-0 nova_compute[351485]: 2025-12-03 02:44:07.322 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:08 compute-0 nova_compute[351485]: 2025-12-03 02:44:08.324 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:09 compute-0 nova_compute[351485]: 2025-12-03 02:44:09.719 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:12 compute-0 nova_compute[351485]: 2025-12-03 02:44:12.325 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:14 compute-0 nova_compute[351485]: 2025-12-03 02:44:14.722 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:17 compute-0 nova_compute[351485]: 2025-12-03 02:44:17.328 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:19 compute-0 nova_compute[351485]: 2025-12-03 02:44:19.724 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:20 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:21 compute-0 podman[492492]: 2025-12-03 02:44:21.856982853 +0000 UTC m=+0.107853815 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 02:44:21 compute-0 podman[492493]: 2025-12-03 02:44:21.886769263 +0000 UTC m=+0.130363850 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 02:44:21 compute-0 podman[492494]: 2025-12-03 02:44:21.910896044 +0000 UTC m=+0.144864289 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  3 02:44:22 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:22 compute-0 nova_compute[351485]: 2025-12-03 02:44:22.331 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:23 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:24 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:24 compute-0 nova_compute[351485]: 2025-12-03 02:44:24.728 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:26 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:27 compute-0 nova_compute[351485]: 2025-12-03 02:44:27.334 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.421212) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867421274, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 251, "total_data_size": 1770347, "memory_usage": 1801664, "flush_reason": "Manual Compaction"}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867436813, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1742446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54434, "largest_seqno": 55608, "table_properties": {"data_size": 1736756, "index_size": 3084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11976, "raw_average_key_size": 19, "raw_value_size": 1725386, "raw_average_value_size": 2851, "num_data_blocks": 138, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764729749, "oldest_key_time": 1764729749, "file_creation_time": 1764729867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 15876 microseconds, and 8839 cpu microseconds.
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.437088) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1742446 bytes OK
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.437288) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439405) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439429) EVENT_LOG_v1 {"time_micros": 1764729867439422, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.439450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1764967, prev total WAL file size 1764967, number of live WAL files 2.
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.444299) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1701KB)], [131(7438KB)]
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867444405, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 9359361, "oldest_snapshot_seqno": -1}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 6832 keys, 7658713 bytes, temperature: kUnknown
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867500966, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 7658713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7618204, "index_size": 22348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179286, "raw_average_key_size": 26, "raw_value_size": 7499630, "raw_average_value_size": 1097, "num_data_blocks": 877, "num_entries": 6832, "num_filter_entries": 6832, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764724656, "oldest_key_time": 0, "file_creation_time": 1764729867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "934233b3-95a6-4219-87ec-c9177c468bdc", "db_session_id": "8J96JYHVNMM2V9HBWT3Y", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.501282) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 7658713 bytes
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.503921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 135.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.4) OK, records in: 7346, records dropped: 514 output_compression: NoCompression
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.503951) EVENT_LOG_v1 {"time_micros": 1764729867503937, "job": 80, "event": "compaction_finished", "compaction_time_micros": 56653, "compaction_time_cpu_micros": 32568, "output_level": 6, "num_output_files": 1, "total_output_size": 7658713, "num_input_records": 7346, "num_output_records": 6832, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867504710, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764729867507005, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.444083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:27 compute-0 ceph-mon[192821]: rocksdb: (Original Log Time 2025/12/03-02:44:27.507345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Optimize plan auto_2025-12-03_02:44:28
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] do_upmap
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Dec  3 02:44:28 compute-0 ceph-mgr[193109]: [balancer INFO root] prepared 0/10 changes
Dec  3 02:44:28 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 02:44:29 compute-0 ceph-mgr[193109]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 02:44:29 compute-0 nova_compute[351485]: 2025-12-03 02:44:29.732 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:29 compute-0 podman[158098]: time="2025-12-03T02:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:44:29 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec  3 02:44:30 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:44:31 compute-0 openstack_network_exporter[368278]: ERROR   02:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:44:31 compute-0 podman[492555]: 2025-12-03 02:44:31.867704382 +0000 UTC m=+0.120693677 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible)
Dec  3 02:44:32 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:32 compute-0 nova_compute[351485]: 2025-12-03 02:44:32.338 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:33 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:34 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:34 compute-0 nova_compute[351485]: 2025-12-03 02:44:34.735 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:36 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:37 compute-0 nova_compute[351485]: 2025-12-03 02:44:37.341 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:37 compute-0 nova_compute[351485]: 2025-12-03 02:44:37.601 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:37 compute-0 podman[492573]: 2025-12-03 02:44:37.893268682 +0000 UTC m=+0.128580998 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  3 02:44:37 compute-0 podman[492574]: 2025-12-03 02:44:37.894171778 +0000 UTC m=+0.128058353 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  3 02:44:37 compute-0 podman[492583]: 2025-12-03 02:44:37.902097011 +0000 UTC m=+0.120011106 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 02:44:37 compute-0 podman[492572]: 2025-12-03 02:44:37.911317092 +0000 UTC m=+0.165393447 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 02:44:37 compute-0 podman[492580]: 2025-12-03 02:44:37.913240336 +0000 UTC m=+0.134449144 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, version=9.4, 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543)
Dec  3 02:44:38 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:38 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 02:44:39 compute-0 ceph-mgr[193109]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  3 02:44:39 compute-0 nova_compute[351485]: 2025-12-03 02:44:39.738 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:40 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:42 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:42 compute-0 nova_compute[351485]: 2025-12-03 02:44:42.344 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.618 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.619 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 02:44:43 compute-0 nova_compute[351485]: 2025-12-03 02:44:43.620 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:44:43 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:44 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:44:44 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499390691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:44:44 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.123 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.655 351492 WARNING nova.virt.libvirt.driver [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.657 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3975MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.657 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.658 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.741 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.763 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.763 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 02:44:44 compute-0 nova_compute[351485]: 2025-12-03 02:44:44.789 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 02:44:45 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  3 02:44:45 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835206597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  3 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.347 351492 DEBUG oslo_concurrency.processutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.360 351492 DEBUG nova.compute.provider_tree [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed in ProviderTree for provider: 107397d2-51bc-4a03-bce4-7cd69319cf05 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.383 351492 DEBUG nova.scheduler.client.report [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Inventory has not changed for provider 107397d2-51bc-4a03-bce4-7cd69319cf05 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.386 351492 DEBUG nova.compute.resource_tracker [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 02:44:45 compute-0 nova_compute[351485]: 2025-12-03 02:44:45.387 351492 DEBUG oslo_concurrency.lockutils [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:44:46 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  3 02:44:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  3 02:44:47 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  3 02:44:47 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647278499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.347 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.388 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.388 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.389 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.410 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 02:44:47 compute-0 nova_compute[351485]: 2025-12-03 02:44:47.410 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:48 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:48 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:49 compute-0 nova_compute[351485]: 2025-12-03 02:44:49.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:49 compute-0 nova_compute[351485]: 2025-12-03 02:44:49.744 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:50 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:50 compute-0 nova_compute[351485]: 2025-12-03 02:44:50.577 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:51 compute-0 nova_compute[351485]: 2025-12-03 02:44:51.571 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:52 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:52 compute-0 nova_compute[351485]: 2025-12-03 02:44:52.351 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:52 compute-0 podman[492723]: 2025-12-03 02:44:52.858045602 +0000 UTC m=+0.089859386 container health_status 82ba4b7a5ca8d6c36a7b846dae3fa1559e53e235fe02ebebd300413e6970e195 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  3 02:44:52 compute-0 podman[492722]: 2025-12-03 02:44:52.859348149 +0000 UTC m=+0.106287590 container health_status 7418403b3c8b6908f3c626bef215f3df6257704b48b04b3c1e17f90aa0284264 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec  3 02:44:52 compute-0 podman[492721]: 2025-12-03 02:44:52.860995296 +0000 UTC m=+0.105726155 container health_status 5241224739fefc386c832233f6e8464dd26d0b70152ab721bd54a3c6bede8ec6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 02:44:53 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:54 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:54 compute-0 nova_compute[351485]: 2025-12-03 02:44:54.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:54 compute-0 nova_compute[351485]: 2025-12-03 02:44:54.747 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:55 compute-0 nova_compute[351485]: 2025-12-03 02:44:55.576 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:55 compute-0 nova_compute[351485]: 2025-12-03 02:44:55.577 351492 DEBUG nova.compute.manager [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 02:44:55 compute-0 systemd-logind[800]: New session 66 of user zuul.
Dec  3 02:44:55 compute-0 systemd[1]: Started Session 66 of User zuul.
Dec  3 02:44:56 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:56 compute-0 nova_compute[351485]: 2025-12-03 02:44:56.572 351492 DEBUG oslo_service.periodic_task [None req-78941cc0-cd29-4443-a4ad-6a9c99258292 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 02:44:57 compute-0 nova_compute[351485]: 2025-12-03 02:44:57.354 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 02:44:58 compute-0 ceph-mgr[193109]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 02:44:58 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.682 288528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 02:44:59 compute-0 ovn_metadata_agent[288523]: 2025-12-03 02:44:59.683 288528 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 02:44:59 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15881 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:44:59 compute-0 podman[158098]: time="2025-12-03T02:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  3 02:44:59 compute-0 nova_compute[351485]: 2025-12-03 02:44:59.751 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42579 "" "Go-http-client/1.1"
Dec  3 02:44:59 compute-0 podman[158098]: @ - - [03/Dec/2025:02:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8206 "" "Go-http-client/1.1"
Dec  3 02:45:00 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:00 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15883 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:01 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  3 02:45:01 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002863928' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: ERROR   02:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  3 02:45:01 compute-0 openstack_network_exporter[368278]: 
Dec  3 02:45:02 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:02 compute-0 podman[493109]: 2025-12-03 02:45:02.126673831 +0000 UTC m=+0.113829213 container health_status ba9453e98060b6fb4b6671fa31ac3e5eaced83d73342a43b1d95312598ed9e92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 02:45:02 compute-0 nova_compute[351485]: 2025-12-03 02:45:02.360 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:45:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:02 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:45:02 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:03 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev aabf4f6b-36b0-4f52-86d1-f764beaa541e does not exist
Dec  3 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 5f8d11b8-7a48-403c-a85f-b2376ae83ce8 does not exist
Dec  3 02:45:03 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev a2caed12-b76a-49dc-ba7d-1c5683b32be3 does not exist
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:45:03 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:45:03 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:45:04 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  3 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:04 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  3 02:45:04 compute-0 ovs-vsctl[493453]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 02:45:04 compute-0 nova_compute[351485]: 2025-12-03 02:45:04.755 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.841945375 +0000 UTC m=+0.081591753 container create bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.81199695 +0000 UTC m=+0.051643328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:04 compute-0 systemd[1]: Started libpod-conmon-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope.
Dec  3 02:45:04 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:04 compute-0 podman[493481]: 2025-12-03 02:45:04.989218031 +0000 UTC m=+0.228864389 container init bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.00689053 +0000 UTC m=+0.246536868 container start bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.011819109 +0000 UTC m=+0.251465467 container attach bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  3 02:45:05 compute-0 cranky_neumann[493506]: 167 167
Dec  3 02:45:05 compute-0 systemd[1]: libpod-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope: Deactivated successfully.
Dec  3 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.025051592 +0000 UTC m=+0.264697970 container died bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5f6db52bb7d115df883225cd28654992e0d9e954db5d7c6752954709733bba-merged.mount: Deactivated successfully.
Dec  3 02:45:05 compute-0 podman[493481]: 2025-12-03 02:45:05.091301022 +0000 UTC m=+0.330947370 container remove bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 02:45:05 compute-0 systemd[1]: libpod-conmon-bf734ce80c10e5eed0a97441ed8dac249b8b2e0e2e162123dc2a6a01e06fbf1d.scope: Deactivated successfully.
Dec  3 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.345596228 +0000 UTC m=+0.074342469 container create 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.312437102 +0000 UTC m=+0.041183443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:05 compute-0 systemd[1]: Started libpod-conmon-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope.
Dec  3 02:45:05 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.495823358 +0000 UTC m=+0.224569599 container init 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec  3 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.51400271 +0000 UTC m=+0.242748981 container start 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 02:45:05 compute-0 podman[493543]: 2025-12-03 02:45:05.522821089 +0000 UTC m=+0.251567350 container attach 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 02:45:06 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 02:45:06 compute-0 virtqemud[154511]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 02:45:06 compute-0 vibrant_lalande[493565]: --> passed data devices: 0 physical, 3 LVM
Dec  3 02:45:06 compute-0 vibrant_lalande[493565]: --> relative data size: 1.0
Dec  3 02:45:06 compute-0 vibrant_lalande[493565]: --> All data devices are unavailable
Dec  3 02:45:06 compute-0 systemd[1]: libpod-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Deactivated successfully.
Dec  3 02:45:06 compute-0 systemd[1]: libpod-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Consumed 1.196s CPU time.
Dec  3 02:45:06 compute-0 podman[493774]: 2025-12-03 02:45:06.838567479 +0000 UTC m=+0.039685491 container died 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-95922a1bb0e75f034591bf6355eb35fb81211620d4e6d7e25ceb22cccf3788df-merged.mount: Deactivated successfully.
Dec  3 02:45:06 compute-0 podman[493774]: 2025-12-03 02:45:06.90594719 +0000 UTC m=+0.107065202 container remove 6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  3 02:45:06 compute-0 systemd[1]: libpod-conmon-6c31113b57bb9585433da53199d841e62fa91385f7a983bbf692c0ab080e339c.scope: Deactivated successfully.
Dec  3 02:45:06 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: cache status {prefix=cache status} (starting...)
Dec  3 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: client ls {prefix=client ls} (starting...)
Dec  3 02:45:07 compute-0 lvm[493923]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 02:45:07 compute-0 lvm[493923]: VG ceph_vg0 finished
Dec  3 02:45:07 compute-0 lvm[493922]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 02:45:07 compute-0 lvm[493922]: VG ceph_vg2 finished
Dec  3 02:45:07 compute-0 lvm[494002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 02:45:07 compute-0 lvm[494002]: VG ceph_vg1 finished
Dec  3 02:45:07 compute-0 nova_compute[351485]: 2025-12-03 02:45:07.364 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.719786237 +0000 UTC m=+0.048462989 container create 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  3 02:45:07 compute-0 systemd[1]: Started libpod-conmon-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope.
Dec  3 02:45:07 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.701167211 +0000 UTC m=+0.029843993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.814685625 +0000 UTC m=+0.143362397 container init 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.82798196 +0000 UTC m=+0.156658712 container start 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.832783876 +0000 UTC m=+0.161460658 container attach 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:45:07 compute-0 determined_moser[494166]: 167 167
Dec  3 02:45:07 compute-0 systemd[1]: libpod-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope: Deactivated successfully.
Dec  3 02:45:07 compute-0 conmon[494166]: conmon 1288a43273fe22906e0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope/container/memory.events
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.838230959 +0000 UTC m=+0.166907711 container died 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e7ae8a4602f081740d5af799d098639a504f5d8f2b0b75c0245a8f81ca10d95-merged.mount: Deactivated successfully.
Dec  3 02:45:07 compute-0 podman[494124]: 2025-12-03 02:45:07.893325874 +0000 UTC m=+0.222002626 container remove 1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moser, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 02:45:07 compute-0 systemd[1]: libpod-conmon-1288a43273fe22906e0a7fbad5b299db9501e92786c454296d917994e312c6f4.scope: Deactivated successfully.
Dec  3 02:45:07 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 02:45:07 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 02:45:08 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.14443993 +0000 UTC m=+0.114251525 container create 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.063687282 +0000 UTC m=+0.033498907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:08 compute-0 systemd[1]: Started libpod-conmon-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope.
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 02:45:08 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.302218523 +0000 UTC m=+0.272030148 container init 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  3 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.316592669 +0000 UTC m=+0.286404274 container start 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:45:08 compute-0 podman[494216]: 2025-12-03 02:45:08.327100655 +0000 UTC m=+0.296912260 container attach 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:45:08 compute-0 podman[494249]: 2025-12-03 02:45:08.357949966 +0000 UTC m=+0.152134384 container health_status 9b7ff537fe9d019bd50a042e0fc452281c5021d82297af66dad4d45977ef43df (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  3 02:45:08 compute-0 podman[494252]: 2025-12-03 02:45:08.360801376 +0000 UTC m=+0.151467395 container health_status df055b295492edbc6e645bbc9e5f08b1ab35a8bdbb4ce0876f345b2ed9d61630 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec  3 02:45:08 compute-0 podman[494248]: 2025-12-03 02:45:08.360838097 +0000 UTC m=+0.158159134 container health_status 945f216527938e724ddbf52beec3ea46378ce9b2244372e5cbff5384a36ceb6b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, version=9.6, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7)
Dec  3 02:45:08 compute-0 podman[494250]: 2025-12-03 02:45:08.367117854 +0000 UTC m=+0.162541978 container health_status c095e31a3195bbbfc2a5d656eec7cd89bcdb2dd39c64b3ef6b4d452615181fd6 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 02:45:08 compute-0 podman[494246]: 2025-12-03 02:45:08.400766314 +0000 UTC m=+0.203950706 container health_status 926ca428581c31f9e3976e1085388766499d695aa1b162f7c3784ef5cd7f248f (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  3 02:45:08 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15890 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 02:45:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  3 02:45:08 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735022238' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 02:45:08 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:45:08 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  3 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918308904' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  3 02:45:09 compute-0 happy_pike[494308]: {
Dec  3 02:45:09 compute-0 happy_pike[494308]:    "0": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:        {
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "devices": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "/dev/loop3"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            ],
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_name": "ceph_lv0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_size": "21470642176",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=551e0f4a-0b7e-47cf-9522-b82f94d4038c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "name": "ceph_lv0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "tags": {
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_uuid": "lo1haU-8w4u-EhMy-kEBk-vc3V-CRme-e9ZJki",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_name": "ceph",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.crush_device_class": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.encrypted": "0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_fsid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_id": "0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.vdo": "0"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            },
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "vg_name": "ceph_vg0"
Dec  3 02:45:09 compute-0 happy_pike[494308]:        }
Dec  3 02:45:09 compute-0 happy_pike[494308]:    ],
Dec  3 02:45:09 compute-0 happy_pike[494308]:    "1": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:        {
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "devices": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "/dev/loop4"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            ],
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_name": "ceph_lv1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_size": "21470642176",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=38b78a6e-cf5e-4c74-a51c-1bb51cf53a18,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "name": "ceph_lv1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "tags": {
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_uuid": "uz5j0q-RgMF-msDM-8VJE-Dw6v-fWGp-54o4Qb",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_name": "ceph",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.crush_device_class": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.encrypted": "0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_fsid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_id": "1",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.vdo": "0"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            },
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "vg_name": "ceph_vg1"
Dec  3 02:45:09 compute-0 happy_pike[494308]:        }
Dec  3 02:45:09 compute-0 happy_pike[494308]:    ],
Dec  3 02:45:09 compute-0 happy_pike[494308]:    "2": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:        {
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "devices": [
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "/dev/loop5"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            ],
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_name": "ceph_lv2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_size": "21470642176",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=3765feb2-36f8-5b86-b74c-64e9221f9c4c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2ebf7eac-7883-4286-84a2-653e10a1ae8a,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "lv_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "name": "ceph_lv2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "tags": {
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.block_uuid": "r8DV4F-gOvM-vLNW-WoDd-IbH8-lxcR-o2AVh8",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cephx_lockbox_secret": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.cluster_name": "ceph",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.crush_device_class": "",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.encrypted": "0",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_fsid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osd_id": "2",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:                "ceph.vdo": "0"
Dec  3 02:45:09 compute-0 happy_pike[494308]:            },
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "type": "block",
Dec  3 02:45:09 compute-0 happy_pike[494308]:            "vg_name": "ceph_vg2"
Dec  3 02:45:09 compute-0 happy_pike[494308]:        }
Dec  3 02:45:09 compute-0 happy_pike[494308]:    ]
Dec  3 02:45:09 compute-0 happy_pike[494308]: }
Dec  3 02:45:09 compute-0 podman[494216]: 2025-12-03 02:45:09.247233021 +0000 UTC m=+1.217044646 container died 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 02:45:09 compute-0 systemd[1]: libpod-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope: Deactivated successfully.
Dec  3 02:45:09 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: ops {prefix=ops} (starting...)
Dec  3 02:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-03233762f38b7ab7dce5fbb8145625d348015ff654a1344ed3fb5d5654336fae-merged.mount: Deactivated successfully.
Dec  3 02:45:09 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15897 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:09 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:09.294+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:45:09 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:45:09 compute-0 podman[494216]: 2025-12-03 02:45:09.329270696 +0000 UTC m=+1.299082311 container remove 97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  3 02:45:09 compute-0 systemd[1]: libpod-conmon-97db9fddeda2e521c9fe11e4e17452591f36477dbb5a75a27c8871626948a7d9.scope: Deactivated successfully.
Dec  3 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  3 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606138239' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  3 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  3 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626519256' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  3 02:45:09 compute-0 nova_compute[351485]: 2025-12-03 02:45:09.757 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:09 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  3 02:45:09 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586845474' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  3 02:45:09 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: session ls {prefix=session ls} (starting...)
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.079035845 +0000 UTC m=+0.063772001 container create 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:45:10 compute-0 ceph-mds[220488]: mds.cephfs.compute-0.bgmlsq asok_command: status {prefix=status} (starting...)
Dec  3 02:45:10 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:10 compute-0 systemd[1]: Started libpod-conmon-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope.
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.06079132 +0000 UTC m=+0.045527506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.188360419 +0000 UTC m=+0.173096605 container init 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 02:45:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 02:45:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529839193' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.198880496 +0000 UTC m=+0.183616672 container start 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 02:45:10 compute-0 angry_babbage[494767]: 167 167
Dec  3 02:45:10 compute-0 systemd[1]: libpod-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope: Deactivated successfully.
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.204751521 +0000 UTC m=+0.189487697 container attach 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.20505891 +0000 UTC m=+0.189795066 container died 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 02:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed10de8fa22999c235a6ed768ca793d3a2873e1d7bf3d097344507ad9dded5e7-merged.mount: Deactivated successfully.
Dec  3 02:45:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15907 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:10 compute-0 podman[494732]: 2025-12-03 02:45:10.255927566 +0000 UTC m=+0.240663722 container remove 96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  3 02:45:10 compute-0 systemd[1]: libpod-conmon-96c5558e901e239fbd749a0d8e50910add9cab7060060179677e82951ba062eb.scope: Deactivated successfully.
Dec  3 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.455269961 +0000 UTC m=+0.063315958 container create f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 02:45:10 compute-0 systemd[1]: Started libpod-conmon-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope.
Dec  3 02:45:10 compute-0 systemd[1]: Started libcrun container.
Dec  3 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.436727728 +0000 UTC m=+0.044773745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  3 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.545496377 +0000 UTC m=+0.153542394 container init f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  3 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.559293557 +0000 UTC m=+0.167339554 container start f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  3 02:45:10 compute-0 podman[494824]: 2025-12-03 02:45:10.563034672 +0000 UTC m=+0.171080669 container attach f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  3 02:45:10 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 02:45:10 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357595830' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 02:45:10 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15911 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121455930' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960436668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]: {
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    "2ebf7eac-7883-4286-84a2-653e10a1ae8a": {
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_id": 2,
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_uuid": "2ebf7eac-7883-4286-84a2-653e10a1ae8a",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "type": "bluestore"
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    },
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18": {
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_id": 1,
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_uuid": "38b78a6e-cf5e-4c74-a51c-1bb51cf53a18",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "type": "bluestore"
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    },
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    "551e0f4a-0b7e-47cf-9522-b82f94d4038c": {
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "ceph_fsid": "3765feb2-36f8-5b86-b74c-64e9221f9c4c",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_id": 0,
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "osd_uuid": "551e0f4a-0b7e-47cf-9522-b82f94d4038c",
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:        "type": "bluestore"
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]:    }
Dec  3 02:45:11 compute-0 xenodochial_euler[494860]: }
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236205888' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914263576' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 02:45:11 compute-0 systemd[1]: libpod-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope: Deactivated successfully.
Dec  3 02:45:11 compute-0 podman[494824]: 2025-12-03 02:45:11.556183649 +0000 UTC m=+1.164229646 container died f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  3 02:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6dc679fc79c01466f1e0388f77e57a8d8d951a951d5180cce3ed936a78d5da8-merged.mount: Deactivated successfully.
Dec  3 02:45:11 compute-0 podman[494824]: 2025-12-03 02:45:11.626054611 +0000 UTC m=+1.234100608 container remove f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 02:45:11 compute-0 systemd[1]: libpod-conmon-f780c8c773605a779e2c9633edd2d4f72243133760f08e7ee3d3050d1a83b191.scope: Deactivated successfully.
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 40376fc5-d14d-476c-9cee-0d3f406aec80 does not exist
Dec  3 02:45:11 compute-0 ceph-mgr[193109]: [progress WARNING root] complete: ev 4e209fec-10ec-4178-a01b-d18d942e7b0f does not exist
Dec  3 02:45:11 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15923 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:11 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 02:45:11 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:11.971+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 02:45:11 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  3 02:45:11 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862237977' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  3 02:45:12 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:12 compute-0 ceph-mon[192821]: from='mgr.14130 192.168.122.100:0/1316353787' entity='mgr.compute-0.rysove' 
Dec  3 02:45:12 compute-0 nova_compute[351485]: 2025-12-03 02:45:12.368 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  3 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729441193' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  3 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477747180' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 02:45:12 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15929 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:12 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  3 02:45:12 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470977912' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  3 02:45:13 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15933 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  3 02:45:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621837691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  3 02:45:13 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:45:13 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  3 02:45:13 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439200539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194740 data_alloc: 218103808 data_used: 9998336
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x1901a08/0x19ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec8c00 session 0x558b8624b0e0
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b85ec9400 session 0x558b85fc3c20
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b84a92800 session 0x558b86376d20
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 31891456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.205551147s of 100.784317017s, submitted: 90
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b83a94f00
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126457 data_alloc: 218103808 data_used: 6328320
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96739328 unmapped: 33947648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.514204025s of 18.552843094s, submitted: 8
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c47400 session 0x558b85fc23c0
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b843b6000 session 0x558b85bbdc20
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83c46c00 session 0x558b862aa3c0
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96763904 unmapped: 33923072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c29f8/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 ms_handle_reset con 0x558b83a73000 session 0x558b8634be00
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:13 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:13 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:13 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49c5/0x98e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994002 data_alloc: 218103808 data_used: 36864
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 38674432 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.285675049s of 85.578956604s, submitted: 49
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92102656 unmapped: 38584320 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb0ef000/0x0/0x4ffc00000, data 0x8c49e8/0x98f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 38486016 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 134 ms_handle_reset con 0x558b83c46c00 session 0x558b85fc3860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 38453248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91750400 unmapped: 38936576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86376960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 38912000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095506 data_alloc: 218103808 data_used: 53248
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b84927860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8311e3c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b8625a960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 38903808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa475000/0x0/0x4ffc00000, data 0x1538138/0x1608000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85278000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862841e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 53.474491119s of 53.753597260s, submitted: 32
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 35282944 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b843b6000 session 0x558b86284f00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b852443c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b85244960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b862fa780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b849b8f00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166167 data_alloc: 218103808 data_used: 4714496
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 34283520 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86376d20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b86377e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83dbb680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8625a960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b84927860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a92800 session 0x558b85fc3c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175957 data_alloc: 218103808 data_used: 4714496
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 34152448 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b85fc23c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634be00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d26000/0x0/0x4ffc00000, data 0x1c861aa/0x1d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbdc20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b83a94f00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624b0e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b84a2cd20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b862841e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86284f00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 34201600 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9592000/0x0/0x4ffc00000, data 0x20081dd/0x20dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b86376000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96509952 unmapped: 34177024 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec9c00 session 0x558b86376b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 34193408 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b863763c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.407323837s of 10.792876244s, submitted: 47
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231863 data_alloc: 218103808 data_used: 4722688
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b862fbe00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b86261e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b844090e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138000 session 0x558b844225a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b8625b4a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849b92c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x229020f/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 33865728 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634b860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96829440 unmapped: 33857536 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262169 data_alloc: 218103808 data_used: 8060928
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 96804864 unmapped: 33882112 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b862faf00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97984512 unmapped: 32702464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b862fab40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9307000/0x0/0x4ffc00000, data 0x2290232/0x2367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 97951744 unmapped: 32735232 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83a73000 session 0x558b83ca2000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b83ca43c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291687 data_alloc: 234881024 data_used: 12001280
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 32309248 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 32292864 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9306000/0x0/0x4ffc00000, data 0x2290242/0x2368000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 30040064 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317927 data_alloc: 234881024 data_used: 15679488
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 30023680 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8800 session 0x558b8624a780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ec8c00 session 0x558b862fb0e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 29679616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.547910690s of 18.724098206s, submitted: 24
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139400 session 0x558b83dbb0e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 28319744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318572 data_alloc: 234881024 data_used: 17334272
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 28237824 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b8634a5a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f93d2000/0x0/0x4ffc00000, data 0x21c5210/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b867461e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 29696000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283015 data_alloc: 234881024 data_used: 15118336
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9632000/0x0/0x4ffc00000, data 0x1f67200/0x203c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 29687808 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.113079071s of 19.223537445s, submitted: 17
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 21757952 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f85a5000/0x0/0x4ffc00000, data 0x2ff4200/0x30c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 21749760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8546000/0x0/0x4ffc00000, data 0x3053200/0x3128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415883 data_alloc: 234881024 data_used: 16392192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 20717568 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b867465a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b86746780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86746960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b86746b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7c5f000/0x0/0x4ffc00000, data 0x393a200/0x3a0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522536 data_alloc: 234881024 data_used: 16547840
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b86746d20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139800 session 0x558b86747a40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139c00 session 0x558b84410d20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110125056 unmapped: 20561920 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b84408b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b83c53680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7ad6000/0x0/0x4ffc00000, data 0x3ac2210/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530322 data_alloc: 234881024 data_used: 16613376
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 20406272 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b83c52960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.731391907s of 15.542462349s, submitted: 209
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 20398080 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530454 data_alloc: 234881024 data_used: 16613376
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x3e78210/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528914 data_alloc: 234881024 data_used: 16613376
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 20381696 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110362624 unmapped: 20324352 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 19881984 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 18202624 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114016256 unmapped: 16670720 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566994 data_alloc: 234881024 data_used: 21929984
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 16654336 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.591274261s of 17.624994278s, submitted: 3
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 16547840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 16539648 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567522 data_alloc: 234881024 data_used: 21929984
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x3e7b210/0x3f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.202155113s of 10.243181229s, submitted: 7
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1567254 data_alloc: 234881024 data_used: 21929984
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 16523264 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7718000/0x0/0x4ffc00000, data 0x3e80210/0x3f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 16515072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b86285680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c48000 session 0x558b862852c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b849272c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b86377e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624be00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8521f680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8ebc000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336665 data_alloc: 234881024 data_used: 13844480
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x267816b/0x274a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336445 data_alloc: 234881024 data_used: 13844480
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 110878720 unmapped: 19808256 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562313080s of 15.728222847s, submitted: 35
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 19046400 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113491968 unmapped: 17195008 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2af716b/0x2bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 16687104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385755 data_alloc: 234881024 data_used: 14766080
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 18096128 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a8d000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386715 data_alloc: 234881024 data_used: 14835712
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.230500221s of 14.482069969s, submitted: 64
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 18079744 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384779 data_alloc: 234881024 data_used: 14835712
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 18071552 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85e1c800 session 0x558b8624af00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a5a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b85bbc3c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85bbcf00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.310770035s of 11.319570541s, submitted: 1
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8a93000/0x0/0x4ffc00000, data 0x2b0916b/0x2bdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b85bbda40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138800 session 0x558b85de6000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8634b4a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b867461e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b867465a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b86746960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fa780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112427008 unmapped: 18259968 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b862fb0e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624b680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b8624a3c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497051 data_alloc: 234881024 data_used: 14835712
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b8624bc20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138c00 session 0x558b8624a000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b8624a780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 18735104 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b83ca3a40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 18677760 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 18661376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 18612224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85a8bc00 session 0x558b83ca2000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502491 data_alloc: 234881024 data_used: 15368192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e81000/0x0/0x4ffc00000, data 0x371a17b/0x37ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18595840 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.251416206s of 11.426207542s, submitted: 28
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18563072 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2400 session 0x558b862abe00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504272 data_alloc: 234881024 data_used: 15368192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 18554880 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 18366464 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521392 data_alloc: 234881024 data_used: 17547264
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114794496 unmapped: 15892480 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 15409152 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 15400960 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764178276s of 10.793314934s, submitted: 4
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552128 data_alloc: 234881024 data_used: 21549056
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 15155200 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 13492224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7e80000/0x0/0x4ffc00000, data 0x371a19e/0x37ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118415360 unmapped: 12271616 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b861f5c00 session 0x558b8311e1e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b849774a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c46c00 session 0x558b86285c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x326017b/0x3333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511816 data_alloc: 234881024 data_used: 21544960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 15089664 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88138400 session 0x558b849b9860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.645172119s of 11.731092453s, submitted: 27
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b88139000 session 0x558b85de3a40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 18604032 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b83c47400 session 0x558b85fc25a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389946 data_alloc: 234881024 data_used: 17207296
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18579456 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8f69000/0x0/0x4ffc00000, data 0x2632158/0x2704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.237525940s of 16.362010956s, submitted: 31
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8eab000/0x0/0x4ffc00000, data 0x26f1158/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,2])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119226368 unmapped: 11460608 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 11444224 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f850d000/0x0/0x4ffc00000, data 0x3081158/0x3153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 11288576 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484722 data_alloc: 234881024 data_used: 17747968
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8476000/0x0/0x4ffc00000, data 0x311d158/0x31ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 11264000 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea2000 session 0x558b84974000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b85ea3400 session 0x558b863774a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478306 data_alloc: 234881024 data_used: 17756160
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114073600 unmapped: 16613376 heap: 130686976 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f845f000/0x0/0x4ffc00000, data 0x313d158/0x320f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 ms_handle_reset con 0x558b84a6f800 session 0x558b8521e960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 33947648 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.629374504s of 10.309167862s, submitted: 151
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b83d11680
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88138400 session 0x558b84410d20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b84927e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8634a780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b84926b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b85244960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85a8bc00 session 0x558b852785a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b84a6f800 session 0x558b83c53860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea2000 session 0x558b83c52780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b85ea3400 session 0x558b8624bc20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624a000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 ms_handle_reset con 0x558b88139000 session 0x558b8624b4a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8c1c000/0x0/0x4ffc00000, data 0x297b13e/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 33832960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1377459 data_alloc: 234881024 data_used: 11022336
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 33841152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139800 session 0x558b849274a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b862fa960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b8624af00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85a8bc00 session 0x558b849774a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98b9/0x20ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253805 data_alloc: 218103808 data_used: 4730880
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b85de61e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 39034880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 39026688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269378 data_alloc: 218103808 data_used: 6901760
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b88139c00 session 0x558b86284d20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f95bf000/0x0/0x4ffc00000, data 0x1fd98dc/0x20af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.745358467s of 14.380681038s, submitted: 113
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 35487744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b83d4a1e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea3400 session 0x558b84a2cf00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2400 session 0x558b84927e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b84a6f800 session 0x558b844101e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 ms_handle_reset con 0x558b85ea2000 session 0x558b84408000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294828 data_alloc: 218103808 data_used: 8527872
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3a1/0x2135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b8521ed20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 35438592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 35422208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 35414016 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301826 data_alloc: 218103808 data_used: 9060352
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 35405824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f9538000/0x0/0x4ffc00000, data 0x205d3c4/0x2136000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302306 data_alloc: 218103808 data_used: 9072640
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35397632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.471420288s of 31.809257507s, submitted: 56
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 115089408 unmapped: 32382976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fdd000/0x0/0x4ffc00000, data 0x25b83c4/0x2691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 32808960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1359190 data_alloc: 234881024 data_used: 9846784
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 32759808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369410 data_alloc: 234881024 data_used: 9990144
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 32866304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x25cd3c4/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 32505856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114466816 unmapped: 33005568 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b19000/0x0/0x4ffc00000, data 0x2a7c3c4/0x2b55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.458605766s of 11.824682236s, submitted: 72
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404216 data_alloc: 234881024 data_used: 10186752
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 8914 writes, 35K keys, 8914 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8914 writes, 2261 syncs, 3.94 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1912 writes, 7094 keys, 1912 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1912 writes, 777 syncs, 2.46 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 32972800 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 32964608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: mgrc ms_handle_reset ms_handle_reset con 0x558b85ea3000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec  3 02:45:14 compute-0 ceph-osd[208731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: mgrc handle_mgr_configure stats_period=5
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411348 data_alloc: 234881024 data_used: 10186752
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.394577026s of 18.451017380s, submitted: 14
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411668 data_alloc: 234881024 data_used: 10194944
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0d000/0x0/0x4ffc00000, data 0x2a883c4/0x2b61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411304 data_alloc: 234881024 data_used: 10215424
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b85de72c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b85de74a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 32735232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8b0b000/0x0/0x4ffc00000, data 0x2a893c4/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b862852c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 33931264 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 33923072 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 33914880 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b8634af00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321090 data_alloc: 218103808 data_used: 7651328
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931c000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 69.777755737s of 69.905036926s, submitted: 21
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113565696 unmapped: 33906688 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 33890304 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 33865728 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 33824768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 33783808 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 33759232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f931d000/0x0/0x4ffc00000, data 0x227a391/0x2351000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 33751040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322354 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 33742848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86746780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b86284b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b849774a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 33734656 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86747a40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.146961212s of 54.832401276s, submitted: 108
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b84a2cf00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b83df8b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85fc2b40
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b86285860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b8521e960
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33701888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139800 session 0x558b83d4a1e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b85278780
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891d000/0x0/0x4ffc00000, data 0x2c7a391/0x2d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b85279e00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393042 data_alloc: 218103808 data_used: 7688192
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea3400 session 0x558b84409860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33693696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114376704 unmapped: 33095680 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1446880 data_alloc: 234881024 data_used: 15056896
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 29958144 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 29753344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117727232 unmapped: 29745152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 29736960 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 29728768 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f891c000/0x0/0x4ffc00000, data 0x2c7a3a1/0x2d52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 29720576 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.778945923s of 44.865009308s, submitted: 6
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 26681344 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8588000/0x0/0x4ffc00000, data 0x300e3a1/0x30e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 28508160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 28499968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 28491776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 28483584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 28475392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 28467200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 28459008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 28450816 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501642 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 28442624 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.571006775s of 112.683082581s, submitted: 22
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 28434432 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 28426240 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 28418048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501114 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857d000/0x0/0x4ffc00000, data 0x30193a1/0x30f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.938446045s of 35.963088989s, submitted: 3
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 28409856 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 28401664 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 28393472 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 28385280 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 28377088 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119103488 unmapped: 28368896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119111680 unmapped: 28360704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 28352512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 28344320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 28336128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501422 data_alloc: 234881024 data_used: 18173952
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 28327936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 28319744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119160832 unmapped: 28311552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501582 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301a3a1/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 189.647811890s of 189.655746460s, submitted: 1
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 28303360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 28295168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119185408 unmapped: 28286976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119193600 unmapped: 28278784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119201792 unmapped: 28270592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 28262400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 28254208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 9225 writes, 35K keys, 9225 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9225 writes, 2410 syncs, 3.83 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 311 writes, 768 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s
Interval WAL: 311 writes, 149 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 28237824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 28229632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 28213248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 28205056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 28196864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 28188672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501890 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8579000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 28180480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 163.277404785s of 163.285751343s, submitted: 1
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119308288 unmapped: 28164096 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 28147712 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 28123136 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 28057600 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 28041216 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501538 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f857b000/0x0/0x4ffc00000, data 0x301b3a1/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119439360 unmapped: 28033024 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.532619476s of 50.152313232s, submitted: 90
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139c00 session 0x558b83ca5c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ec6c00 session 0x558b849b8000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 28000256 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501242 data_alloc: 234881024 data_used: 18178048
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b83c47000 session 0x558b86260f00
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b88139000 session 0x558b86284000
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b879ce000 session 0x558b862aa5a0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8ab9000/0x0/0x4ffc00000, data 0x2add31c/0x2bb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 26927104 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450062 data_alloc: 234881024 data_used: 17440768
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970065117s of 11.374399185s, submitted: 57
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 ms_handle_reset con 0x558b85ea2000 session 0x558b844112c0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 33382400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 34086912 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f985c000/0x0/0x4ffc00000, data 0x1d3d30c/0x1e12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278563 data_alloc: 218103808 data_used: 6950912
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 34078720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.220892906s of 11.268563271s, submitted: 8
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 34070528 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 ms_handle_reset con 0x558b83c47000 session 0x558b863761e0
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x153eeba/0x1614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 41213952 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 140 ms_handle_reset con 0x558b85ec6c00 session 0x558b86285c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 41205760 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 41156608 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b879ce000 session 0x558b85fc3c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214842 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa053000/0x0/0x4ffc00000, data 0x1542634/0x161a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.120633125s of 12.398234367s, submitted: 53
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 ms_handle_reset con 0x558b88139000 session 0x558b83ca3c20
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217816 data_alloc: 218103808 data_used: 143360
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 41148416 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 40976384 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 41009152 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf dump' '{prefix=perf dump}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf schema' '{prefix=perf schema}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 ms_handle_reset con 0x558b84a6f800 session 0x558b83e61860
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106528768 unmapped: 40943616 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106536960 unmapped: 40935424 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 40927232 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106553344 unmapped: 40919040 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 40910848 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 40894464 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106586112 unmapped: 40886272 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 40878080 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 40869888 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106610688 unmapped: 40861696 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 40853504 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 40845312 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 9641 writes, 36K keys, 9641 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9641 writes, 2604 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 416 writes, 951 keys, 416 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
Interval WAL: 416 writes, 194 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106635264 unmapped: 40837120 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106643456 unmapped: 40828928 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106651648 unmapped: 40820736 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 40812544 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 40804352 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 40796160 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 40787968 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 40779776 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 40771584 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 40763392 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa050000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218136 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 515.138610840s of 515.162719727s, submitted: 14
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 40755200 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106725376 unmapped: 40747008 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 40706048 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 40656896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 40648704 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 40640512 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106840064 unmapped: 40632320 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106848256 unmapped: 40624128 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106856448 unmapped: 40615936 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 40607744 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 40599552 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 40591360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 40591360 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 40583168 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 40574976 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 40566784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 40566784 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 40558592 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106921984 unmapped: 40550400 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 40542208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 40542208 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 40525824 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106954752 unmapped: 40517632 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 40509440 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 40501248 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 40493056 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 40484864 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106995712 unmapped: 40476672 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 40468480 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa051000/0x0/0x4ffc00000, data 0x15440b7/0x161d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 40460288 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 40460288 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 40222720 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:14 compute-0 ceph-osd[208731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:14 compute-0 ceph-osd[208731]: bluestore.MempoolThread(0x558b822ebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217256 data_alloc: 218103808 data_used: 151552
Dec  3 02:45:14 compute-0 ceph-osd[208731]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 40656896 heap: 147472384 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:14 compute-0 ceph-osd[208731]: do_command 'log dump' '{prefix=log dump}'
Dec  3 02:45:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  3 02:45:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672553893' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  3 02:45:14 compute-0 rsyslogd[188612]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15945 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:14 compute-0 nova_compute[351485]: 2025-12-03 02:45:14.758 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:14 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  3 02:45:14 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468121917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  3 02:45:14 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15949 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:15 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15953 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 02:45:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  3 02:45:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471470328' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  3 02:45:15 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15955 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:45:15 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  3 02:45:15 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900289867' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  3 02:45:16 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:16 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15959 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:45:16 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  3 02:45:16 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529789203' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  3 02:45:17 compute-0 ceph-mgr[193109]: log_channel(audit) log [DBG] : from='client.15967 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 02:45:17 compute-0 ceph-3765feb2-36f8-5b86-b74c-64e9221f9c4c-mgr-compute-0-rysove[193105]: 2025-12-03T02:45:17.081+0000 7fabb0026640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:45:17 compute-0 ceph-mgr[193109]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 02:45:17 compute-0 nova_compute[351485]: 2025-12-03 02:45:17.383 351492 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  3 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  3 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671506822' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  3 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  3 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877054004' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  3 02:45:17 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  3 02:45:17 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744788624' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  3 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850072463' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  3 02:45:18 compute-0 ceph-mgr[193109]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  3 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084016343' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  3 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236038443' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  3 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507683952' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  3 02:45:18 compute-0 ceph-mon[192821]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  3 02:45:18 compute-0 ceph-mon[192821]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26662517' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367004 data_alloc: 218103808 data_used: 16027648
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f901d000/0x0/0x4ffc00000, data 0x2586f05/0x2651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 32702464 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75bd0e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.323188782s of 99.935211182s, submitted: 90
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3d000 session 0x55f0a529c960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3a800 session 0x55f0a57e3860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 112738304 unmapped: 32669696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9d79000/0x0/0x4ffc00000, data 0x182af05/0x18f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a75c1680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198259 data_alloc: 218103808 data_used: 8048640
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f9db7000/0x0/0x4ffc00000, data 0x17eee93/0x18b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.926574707s of 19.366115570s, submitted: 65
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b3dc00 session 0x55f0a7f5bc20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a57ab000 session 0x55f0a56af2c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107528192 unmapped: 37879808 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 38379520 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a553f680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107012096 unmapped: 38395904 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051955 data_alloc: 218103808 data_used: 7053312
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bfe31/0x987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.550582886s of 85.862503052s, submitted: 51
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 134 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbbc20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107020288 unmapped: 38387712 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107044864 unmapped: 38363136 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142798 data_alloc: 218103808 data_used: 7061504
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a4d42b40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146916 data_alloc: 218103808 data_used: 7069696
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 38354944 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a7ecc1e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a7ecc3c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 31768576 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167396 data_alloc: 218103808 data_used: 13885440
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.427654266s of 53.600463867s, submitted: 15
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 27410432 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa06f000/0x0/0x4ffc00000, data 0x153354e/0x15fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a726d860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a726d680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a553e000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a75b83c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 31604736 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6b400 session 0x55f0a7cbba40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38c00 session 0x55f0a64152c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3f800 session 0x55f0a80205a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a57aa000 session 0x55f0a756c780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6bbdc00 session 0x55f0a54dcf00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6a400 session 0x55f0a56aed20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303610 data_alloc: 218103808 data_used: 13885440
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113991680 unmapped: 31416320 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a54dd4a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a8b46780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 31432704 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a8df6b40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a4d7d680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8eec000/0x0/0x4ffc00000, data 0x26b655e/0x2782000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8592800 session 0x55f0a5578d20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a4b37c20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a84fa780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a7eb74a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114302976 unmapped: 31105024 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a52a65a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a885b2c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114327552 unmapped: 31080448 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114343936 unmapped: 31064064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400757 data_alloc: 218103808 data_used: 13889536
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 31055872 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290942192s of 10.699803352s, submitted: 40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a58825a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e71400 session 0x55f0a810bc20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 31350784 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a81732c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a68000/0x0/0x4ffc00000, data 0x3b3956e/0x3c06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467170 data_alloc: 218103808 data_used: 13893632
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 31907840 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 113508352 unmapped: 31899648 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 31236096 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a3e000/0x0/0x4ffc00000, data 0x3b6356e/0x3c30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516450 data_alloc: 234881024 data_used: 20791296
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 114737152 unmapped: 30670848 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7450400 session 0x55f0a7eb72c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 28491776 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.427840233s of 10.600404739s, submitted: 22
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 23896064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 125722624 unmapped: 19685376 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 14385152 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1680281 data_alloc: 251658240 data_used: 40378368
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 12632064 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858e800 session 0x55f0a96b8f00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591c00 session 0x55f0a7eced20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 12574720 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7a14000/0x0/0x4ffc00000, data 0x3b8d56e/0x3c5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3c000 session 0x55f0a6afd680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1544478 data_alloc: 234881024 data_used: 33796096
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 15441920 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133218304 unmapped: 12189696 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83f5000/0x0/0x4ffc00000, data 0x31ad55e/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 11264000 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651e000 session 0x55f0a57e6780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.212381363s of 13.371566772s, submitted: 31
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7ec3c00 session 0x55f0a8571e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596346 data_alloc: 251658240 data_used: 41136128
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 14958592 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594c00 session 0x55f0a57e23c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1475072 data_alloc: 234881024 data_used: 32735232
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c24000/0x0/0x4ffc00000, data 0x297e55e/0x2a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 14942208 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.805484772s of 13.987822533s, submitted: 30
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1533936 data_alloc: 234881024 data_used: 33460224
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85de000/0x0/0x4ffc00000, data 0x2fc455e/0x3090000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 10280960 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 11657216 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585670 data_alloc: 234881024 data_used: 34004992
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 10543104 heap: 145408000 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8073000/0x0/0x4ffc00000, data 0x352955e/0x35f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d800 session 0x55f0a60fc780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135520256 unmapped: 20389888 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137887744 unmapped: 18022400 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f780e000/0x0/0x4ffc00000, data 0x3d8555e/0x3e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667382 data_alloc: 251658240 data_used: 34631680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a4b37e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 17997824 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8597000 session 0x55f0a810bc20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.885555267s of 13.601085663s, submitted: 132
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 18956288 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f781b000/0x0/0x4ffc00000, data 0x3d8755e/0x3e53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8594000 session 0x55f0a810b0e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a810a000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665953 data_alloc: 251658240 data_used: 34635776
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 18792448 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f6000/0x0/0x4ffc00000, data 0x3dab56e/0x3e78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665621 data_alloc: 251658240 data_used: 34635776
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 18784256 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 17981440 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1695221 data_alloc: 251658240 data_used: 38678528
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 16302080 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140410880 unmapped: 15499264 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717141 data_alloc: 251658240 data_used: 41816064
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 15532032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717461 data_alloc: 251658240 data_used: 41824256
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.360565186s of 30.452342987s, submitted: 11
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1717637 data_alloc: 251658240 data_used: 41824256
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 15523840 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x3dac56e/0x3e79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 15360000 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a8df70e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a81721e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 15335424 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8591400 session 0x55f0a75ba780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e6cc00 session 0x55f0a81734a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b41800 session 0x55f0a6414780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1549753 data_alloc: 234881024 data_used: 34029568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 19628032 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 19611648 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 19603456 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 19775488 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550873 data_alloc: 234881024 data_used: 34156544
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2fb656e/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 19767296 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.911537170s of 15.160791397s, submitted: 53
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 20701184 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 18833408 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1623403 data_alloc: 234881024 data_used: 34353152
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e77000/0x0/0x4ffc00000, data 0x372956e/0x37f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 19275776 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e55000/0x0/0x4ffc00000, data 0x374c56e/0x3819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621595 data_alloc: 234881024 data_used: 34357248
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 20406272 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.919089317s of 14.367115021s, submitted: 74
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19357696 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621715 data_alloc: 234881024 data_used: 34357248
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x375656e/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 19349504 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.317792892s of 11.335372925s, submitted: 3
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529b2c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a56acd20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7eb61e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 20971520 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a632b0e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a858d000 session 0x55f0a7dee960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1668235 data_alloc: 234881024 data_used: 34357248
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a75b74a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a75b6960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b7000/0x0/0x4ffc00000, data 0x3ce95d0/0x3db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20987904 heap: 155910144 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593800 session 0x55f0a75b7860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a553fa40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab1800 session 0x55f0a7ecd4a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7ecc1e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a7ecc3c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a52a7e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f78b6000/0x0/0x4ffc00000, data 0x3ce95e0/0x3db8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135340032 unmapped: 28442624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a57e32c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a810ba40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 28196864 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a75bb680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135618560 unmapped: 28164096 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a810a3c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1772936 data_alloc: 251658240 data_used: 35344384
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135921664 unmapped: 27860992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x485b665/0x492c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a58901e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a60110e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 27852800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b38400 session 0x55f0a4d42780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765680313s of 11.383768082s, submitted: 98
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7eccd20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136249344 unmapped: 27533312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777362 data_alloc: 251658240 data_used: 35352576
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136265728 unmapped: 27516928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777682 data_alloc: 251658240 data_used: 35360768
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 136896512 unmapped: 26886144 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 26206208 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1818642 data_alloc: 251658240 data_used: 41193472
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x487f675/0x4951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139427840 unmapped: 24354816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 23789568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 18759680 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a651f000 session 0x55f0a6acf4a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a6ab0800 session 0x55f0a75c21e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.797070503s of 15.820782661s, submitted: 2
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1866786 data_alloc: 251658240 data_used: 48082944
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a8593c00 session 0x55f0a7ece5a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f748f000/0x0/0x4ffc00000, data 0x3d0e5f3/0x3ddd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f748f000/0x0/0x4ffc00000, data 0x3d0e5f3/0x3ddd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1735401 data_alloc: 251658240 data_used: 41189376
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142245888 unmapped: 21536768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7eb7a40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a60c9000 session 0x55f0a6b8fa40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a7e73000 session 0x55f0a6b8f0e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8aa1000/0x0/0x4ffc00000, data 0x2aff5e3/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8acb000/0x0/0x4ffc00000, data 0x2ad55e3/0x2ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491587 data_alloc: 234881024 data_used: 27148288
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 131342336 unmapped: 32440320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.834514618s of 23.068130493s, submitted: 47
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134021120 unmapped: 29761536 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8139000/0x0/0x4ffc00000, data 0x34675e3/0x3535000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1571961 data_alloc: 234881024 data_used: 27537408
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 29130752 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1578329 data_alloc: 234881024 data_used: 27856896
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 29065216 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4d40c00 session 0x55f0a80210e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a56d1c00 session 0x55f0a885ad20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f812f000/0x0/0x4ffc00000, data 0x34715e3/0x353f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 34136064 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a529d4a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8c71000/0x0/0x4ffc00000, data 0x251e56e/0x25eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129687552 unmapped: 34095104 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.617131233s of 10.342635155s, submitted: 143
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a75bb680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a56ad860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 34111488 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a553fa40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b7c20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 34603008 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b74a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a4b3cc00 session 0x55f0a7ece780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a5f18400 session 0x55f0a6b8e960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a7c74800 session 0x55f0a75c3860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484659 data_alloc: 234881024 data_used: 21315584
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 ms_handle_reset con 0x55f0a858fc00 session 0x55f0a562e5a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 34578432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2bf40fb/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 34643968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a56d1800 session 0x55f0a810a5a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858d800 session 0x55f0a756c780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a529c000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a75b8b40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307635 data_alloc: 218103808 data_used: 13893632
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a8595c00 session 0x55f0a7eb7e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a7cbb860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f9584000/0x0/0x4ffc00000, data 0x1c0acac/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 40632320 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a858ec00 session 0x55f0a7eb61e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a4b37e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a80d1e00
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a785b000 session 0x55f0a6b8e780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a7eb74a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 40861696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314319 data_alloc: 218103808 data_used: 14106624
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 40902656 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.097565651s of 12.628091812s, submitted: 91
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a75b92c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6955400 session 0x55f0a5897860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f955a000/0x0/0x4ffc00000, data 0x1c34cbc/0x1d04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123027456 unmapped: 40755200 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a7e6f000 session 0x55f0a726c960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 39845888 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 ms_handle_reset con 0x55f0a6bbf000 session 0x55f0a8b463c0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b38800 session 0x55f0a6aced20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436171 data_alloc: 234881024 data_used: 20680704
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b39c00 session 0x55f0a6afcb40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124067840 unmapped: 39714816 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a858c000 session 0x55f0a75ba960
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8590c00 session 0x55f0a6acf680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 39698432 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437451 data_alloc: 234881024 data_used: 20795392
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 39149568 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 38887424 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 35733504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 35725312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1491211 data_alloc: 234881024 data_used: 28409856
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8ca7000/0x0/0x4ffc00000, data 0x24e571f/0x25b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 128065536 unmapped: 35717120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.258148193s of 33.476356506s, submitted: 37
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138297344 unmapped: 25485312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1608251 data_alloc: 234881024 data_used: 29712384
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7e73000/0x0/0x4ffc00000, data 0x330c71f/0x33dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 138543104 unmapped: 25239552 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 26656768 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1621193 data_alloc: 234881024 data_used: 29835264
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7dc7000/0x0/0x4ffc00000, data 0x33be71f/0x348f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 26640384 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 2973 syncs, 3.71 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2088 writes, 7681 keys, 2088 commit groups, 1.0 writes per commit group, ingest: 7.32 MB, 0.01 MB/s#012Interval WAL: 2088 writes, 866 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 26607616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264138222s of 10.018401146s, submitted: 173
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7bc4000/0x0/0x4ffc00000, data 0x35c371f/0x3694000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1633635 data_alloc: 234881024 data_used: 30130176
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 25812992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b3a000/0x0/0x4ffc00000, data 0x364a71f/0x371b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 26673152 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1641113 data_alloc: 234881024 data_used: 29941760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a7c73c00 session 0x55f0a80d0780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: mgrc ms_handle_reset ms_handle_reset con 0x55f0a651e800
Dec  3 02:45:19 compute-0 ceph-osd[207705]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1922561230
Dec  3 02:45:19 compute-0 ceph-osd[207705]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1922561230,v1:192.168.122.100:6801/1922561230]
Dec  3 02:45:19 compute-0 ceph-osd[207705]: mgrc handle_mgr_configure stats_period=5
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4d41c00 session 0x55f0a56ac000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 26624000 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a6ab1400 session 0x55f0a57e7860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b33000/0x0/0x4ffc00000, data 0x365a71f/0x372b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.565891266s of 10.747445107s, submitted: 33
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137330688 unmapped: 26451968 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636453 data_alloc: 234881024 data_used: 29941760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b30000/0x0/0x4ffc00000, data 0x365d71f/0x372e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.241274834s of 10.265392303s, submitted: 5
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1636785 data_alloc: 234881024 data_used: 29941760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 26443776 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b2d000/0x0/0x4ffc00000, data 0x366071f/0x3731000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26435584 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638097 data_alloc: 234881024 data_used: 29954048
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26394624 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a7eb6d20
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a6afda40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 26427392 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a6afd860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409806 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852160454s of 12.121880531s, submitted: 48
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 29908992 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133881856 unmapped: 29900800 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 29892608 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409982 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 29884416 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.712953568s of 30.721988678s, submitted: 1
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 29851648 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 29843456 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 29835264 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a4b40800 session 0x55f0a8544b40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411390 data_alloc: 234881024 data_used: 21741568
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133955584 unmapped: 29827072 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133963776 unmapped: 29818880 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.298162460s of 26.320926666s, submitted: 8
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 29794304 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 29777920 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 29753344 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 29679616 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 29622272 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 28573696 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 28565504 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135225344 unmapped: 28557312 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 28549120 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410126 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 28540928 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7ecd680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a4b37680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a6415680
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f912b000/0x0/0x4ffc00000, data 0x206370f/0x2133000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135249920 unmapped: 28532736 heap: 163782656 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a529c5a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.039604187s of 54.708065033s, submitted: 110
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8593400 session 0x55f0a96b8b40
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a8596000 session 0x55f0a7def0e0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a9f58800 session 0x55f0a5529860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a60c7000 session 0x55f0a810b4a0
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a80ed400 session 0x55f0a8172780
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135708672 unmapped: 31752192 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8740000/0x0/0x4ffc00000, data 0x2a4c781/0x2b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 31744000 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1502243 data_alloc: 234881024 data_used: 21749760
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 31735808 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 ms_handle_reset con 0x55f0a5768c00 session 0x55f0a4b37860
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 32088064 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527149 data_alloc: 234881024 data_used: 25067520
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134717440 unmapped: 32743424 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 32481280 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1567309 data_alloc: 234881024 data_used: 30720000
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8716000/0x0/0x4ffc00000, data 0x2a76781/0x2b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 29900800 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.706817627s of 44.924095154s, submitted: 44
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142917632 unmapped: 24543232 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143196160 unmapped: 24264704 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 141459456 unmapped: 26001408 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7d000/0x0/0x4ffc00000, data 0x3606781/0x36d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 24707072 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1678761 data_alloc: 234881024 data_used: 31834112
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b7b000/0x0/0x4ffc00000, data 0x3611781/0x36e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673013 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.688020706s of 20.183700562s, submitted: 135
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 24453120 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 24444928 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143024128 unmapped: 24436736 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1672881 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.763500214s of 43.778255463s, submitted: 2
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 24428544 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143040512 unmapped: 24420352 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143048704 unmapped: 24412160 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143056896 unmapped: 24403968 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 24395776 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b78000/0x0/0x4ffc00000, data 0x3614781/0x36e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143073280 unmapped: 24387584 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673585 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 54.550392151s of 54.573677063s, submitted: 5
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673149 data_alloc: 234881024 data_used: 31838208
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b61000/0x0/0x4ffc00000, data 0x362b781/0x36fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673309 data_alloc: 234881024 data_used: 31842304
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143261696 unmapped: 24199168 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.851950645s of 15.870507240s, submitted: 2
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143343616 unmapped: 24117248 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143351808 unmapped: 24109056 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 24100864 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1673729 data_alloc: 234881024 data_used: 31842304
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.315594673s of 14.335906982s, submitted: 2
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143384576 unmapped: 24076288 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143392768 unmapped: 24068096 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 24059904 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143409152 unmapped: 24051712 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 24043520 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143425536 unmapped: 24035328 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 24027136 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 24018944 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 02:45:19 compute-0 ceph-osd[207705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  3 02:45:19 compute-0 ceph-osd[207705]: bluestore.MempoolThread(0x55f0a3e3bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1676529 data_alloc: 234881024 data_used: 31830016
Dec  3 02:45:19 compute-0 ceph-osd[207705]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7b4c000/0x0/0x4ffc00000, data 0x3640781/0x3712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  3 02:45:19 compute-0 ceph-osd[207705]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 24010752 heap: 167460864 old mem: 2845415832 new mem: 2845415832
